Path to dependency file: /samples/client/petstore/java/jersey1/build.gradle
Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.6.4/f2abadd10891512268b16a1a1a6f81890f3e2976/jackson-databind-2.6.4.jar,/aches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.6.4/f2abadd10891512268b16a1a1a6f81890f3e2976/jackson-databind-2.6.4.jar
Path to dependency file: /samples/client/petstore/scala/build.gradle
Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.4.5/c69c0cb613128c69d84a6a0304ddb9fce82e8242/jackson-databind-2.4.5.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.4.5/c69c0cb613128c69d84a6a0304ddb9fce82e8242/jackson-databind-2.4.5.jar
FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the interaction between serialization gadgets and typing, related to org.apache.tomcat.dbcp.dbcp2.datasources.SharedPoolDataSource.
Path to dependency file: /samples/client/petstore/java/jersey1/build.gradle
Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.6.4/f2abadd10891512268b16a1a1a6f81890f3e2976/jackson-databind-2.6.4.jar,/aches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.6.4/f2abadd10891512268b16a1a1a6f81890f3e2976/jackson-databind-2.6.4.jar
Path to dependency file: /samples/client/petstore/scala/build.gradle
Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.4.5/c69c0cb613128c69d84a6a0304ddb9fce82e8242/jackson-databind-2.4.5.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.4.5/c69c0cb613128c69d84a6a0304ddb9fce82e8242/jackson-databind-2.4.5.jar
FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the interaction between serialization gadgets and typing, related to org.apache.tomcat.dbcp.dbcp2.datasources.SharedPoolDataSource.
",0,cve high detected in multiple libraries cve high severity vulnerability vulnerable libraries jackson databind jar jackson databind jar jackson databind jar jackson databind jar jackson databind jar jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to vulnerable library home wss scanner cache com fasterxml jackson core jackson databind bundles jackson databind jar dependency hierarchy lagom scaladsl api jar root library lagom api jar play jar x jackson databind jar vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to vulnerable library home wss scanner cache com fasterxml jackson core jackson databind bundles jackson databind jar dependency hierarchy play guice jar root library play jar x jackson databind jar vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file samples client petstore java build gradle path to vulnerable library home wss scanner gradle caches modules files com fasterxml jackson core jackson databind jackson databind jar aches modules files com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy x jackson databind jar vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to vulnerable library home wss scanner cache com fasterxml jackson core jackson databind bundles jackson databind jar dependency hierarchy finch circe jar root library circe jar x jackson databind jar vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file samples client petstore scala build gradle path to vulnerable library home wss scanner gradle caches modules files com fasterxml jackson core jackson databind jackson databind jar home wss scanner gradle caches modules files com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy swagger core jar root library x jackson databind jar vulnerable library found in head commit a href found in base branch master vulnerability details fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to org apache tomcat dbcp datasources sharedpooldatasource publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com fasterxml jackson core jackson databind isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree com lightbend lagom lagom scaladsl api com lightbend lagom lagom api com typesafe play play com fasterxml jackson core jackson databind isminimumfixversionavailable true minimumfixversion com fasterxml jackson core jackson databind isbinary false packagetype java groupid com fasterxml jackson core packagename jackson databind packageversion packagefilepaths istransitivedependency true dependencytree com typesafe play play guice com typesafe play play com fasterxml jackson 
core jackson databind isminimumfixversionavailable true minimumfixversion com fasterxml jackson core jackson databind isbinary false packagetype java groupid com fasterxml jackson core packagename jackson databind packageversion packagefilepaths istransitivedependency false dependencytree com fasterxml jackson core jackson databind isminimumfixversionavailable true minimumfixversion com fasterxml jackson core jackson databind isbinary false packagetype java groupid com fasterxml jackson core packagename jackson databind packageversion packagefilepaths istransitivedependency true dependencytree com github finagle finch circe io circe circe com fasterxml jackson core jackson databind isminimumfixversionavailable true minimumfixversion com fasterxml jackson core jackson databind isbinary false packagetype java groupid com fasterxml jackson core packagename jackson databind packageversion packagefilepaths istransitivedependency true dependencytree io swagger swagger core com fasterxml jackson core jackson databind isminimumfixversionavailable true minimumfixversion com fasterxml jackson core jackson databind isbinary false basebranches vulnerabilityidentifier cve vulnerabilitydetails fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to org apache tomcat dbcp datasources sharedpooldatasource vulnerabilityurl ,0
36534,15022780078.0,IssuesEvent,2021-02-01 17:22:14,hashicorp/terraform-provider-aws,https://api.github.com/repos/hashicorp/terraform-provider-aws,closed,Getting unknown block type advanced_backup_setting ,service/backup,"I am using terraform v0.12.14
I am getting an unknown block type **advanced_backup_setting** error while using it in my tf code. Can someone please help? I have verified it is using provider.aws version 3.26.
resource ""aws_backup_plan"" ""test_backup"" {
name = ""test_backup_plan""
rule {
rule_name = ""test_backup_rule""
target_vault_name = aws_backup_vault.test.name
schedule = ""cron(0 16 * * ? *)""
}
advanced_backup_setting {
backup_options = {
WindowsVSS = ""enabled""
}
resource_type = ""EC2""
}
}
",1.0,"Getting unknown block type advanced_backup setting - I am using terraform v0.12.14
I am getting an unknown block type **advanced_backup_setting** error while using it in my tf code. Can someone please help? I have verified it is using provider.aws version 3.26.
resource ""aws_backup_plan"" ""test_backup"" {
name = ""test_backup_plan""
rule {
rule_name = ""test_backup_rule""
target_vault_name = aws_backup_vault.test.name
schedule = ""cron(0 16 * * ? *)""
}
advanced_backup_setting {
backup_options = {
WindowsVSS = ""enabled""
}
resource_type = ""EC2""
}
}
",0,getting unknown block type advanced backup setting i am using terraform i am getting unknown block type advanced backup setting while using it in my tf code can someone please help i have verified it is using provider aws version resource aws backup plan test backup name test backup plan rule rule name test backup rule target vault name aws backup vault test name schedule cron advanced backup setting backup options windowsvss enabled resource type ,0
99693,8708527890.0,IssuesEvent,2018-12-06 11:10:15,KratosMultiphysics/Kratos,https://api.github.com/repos/KratosMultiphysics/Kratos,closed,[Testing][Adjoint] Missing header here (with the license and the author),Kratos Core Licencing Testing,"https://github.com/KratosMultiphysics/Kratos/blob/c5b413682acd83040b72e26ab64eab21139bfac6/kratos/tests/strategies/schemes/test_residual_based_adjoint_bossak_scheme.cpp#L1
Besides, wait until the cpp tests are moved to a common folder",1.0,"[Testing][Adjoint] Missing header here (with the license and the author) - https://github.com/KratosMultiphysics/Kratos/blob/c5b413682acd83040b72e26ab64eab21139bfac6/kratos/tests/strategies/schemes/test_residual_based_adjoint_bossak_scheme.cpp#L1
Besides, wait until the cpp tests are moved to a common folder",0, missing header here with the license and the author besides wait until the cpp tests are moved to a common folder,0
5880,21553443207.0,IssuesEvent,2022-04-30 02:43:07,o3de/o3de,https://api.github.com/repos/o3de/o3de,opened,AR Bug Report - Test AutomatedTesting::MultiplayerTests_Main failed,kind/bug needs-triage kind/automation,"**Describe the bug**
`AutomatedTesting::MultiplayerTests_Main.main::TEST_RUN` failed while running AR
**Failed Jenkins Job Information:**
platform linux -- Python 3.7.12, pytest-5.3.2, py-1.11.0, pluggy-0.13.1 -- /data/workspace/o3de/python/runtime/python-3.7.12-rev2-linux/python/bin/python
```
[2022-04-29T10:46:42.179Z] E Failed: Test test_Multiplayer_AutoComponent_NetworkInput:
[2022-04-29T10:46:42.179Z] E Test FAILED
[2022-04-29T10:46:42.179Z] E ------------
[2022-04-29T10:46:42.179Z] E | Output |
[2022-04-29T10:46:42.179Z] E ------------
[2022-04-29T10:46:42.179Z] E Starting test Multiplayer_AutoComponent_NetworkInput...
[2022-04-29T10:46:42.179Z] E Test Multiplayer_AutoComponent_NetworkInput finished.
[2022-04-29T10:46:42.179Z] E Report:
[2022-04-29T10:46:42.179Z] E [SUCCESS] Success: Unexpected line not found: LaunchEditorServer failed! The ServerLauncher binary is missing!
[2022-04-29T10:46:42.179Z] E [SUCCESS] Success: Found expected line: Editor has connected to the editor-server.
[2022-04-29T10:46:42.179Z] E [SUCCESS] Success: Found expected line: Editor is sending the editor-server the level data packet.
[2022-04-29T10:46:42.179Z] E [FAILED ] Failure: Failed to find expected line: Logger: Editor Server completed receiving the editor's level assets, responding to Editor...
[2022-04-29T10:46:42.179Z] E EXCEPTION raised:
[2022-04-29T10:46:42.179Z] E Traceback (most recent call last):
[2022-04-29T10:46:42.179Z] E File ""/data/workspace/o3de/AutomatedTesting/Gem/PythonTests/EditorPythonTestTools/editor_python_test_tools/utils.py"", line 305, in start_test
[2022-04-29T10:46:42.179Z] E test_function()
[2022-04-29T10:46:42.179Z] E File ""/data/workspace/o3de/AutomatedTesting/Gem/PythonTests/Multiplayer/tests/Multiplayer_AutoComponent_NetworkInput.py"", line 57, in Multiplayer_AutoComponent_NetworkInput
[2022-04-29T10:46:42.179Z] E helper.multiplayer_enter_game_mode(Tests.enter_game_mode)
[2022-04-29T10:46:42.179Z] E File ""/data/workspace/o3de/AutomatedTesting/Gem/PythonTests/EditorPythonTestTools/editor_python_test_tools/utils.py"", line 178, in multiplayer_enter_game_mode
[2022-04-29T10:46:42.179Z] E TestHelper.succeed_if_log_line_found(""EditorServer"", ""Logger: Editor Server completed receiving the editor's level assets, responding to Editor..."", section_tracer.prints, 5.0)
[2022-04-29T10:46:42.179Z] E File ""/data/workspace/o3de/AutomatedTesting/Gem/PythonTests/EditorPythonTestTools/editor_python_test_tools/utils.py"", line 138, in succeed_if_log_line_found
[2022-04-29T10:46:42.179Z] E Report.critical_result((""Found expected line: "" + line, ""Failed to find expected line: "" + line), TestHelper.find_line(window, line, print_infos))
[2022-04-29T10:46:42.179Z] E File ""/data/workspace/o3de/AutomatedTesting/Gem/PythonTests/EditorPythonTestTools/editor_python_test_tools/utils.py"", line 410, in critical_result
[2022-04-29T10:46:42.179Z] E TestHelper.fail_fast(fast_fail_message)
[2022-04-29T10:46:42.179Z] E File ""/data/workspace/o3de/AutomatedTesting/Gem/PythonTests/EditorPythonTestTools/editor_python_test_tools/utils.py"", line 213, in fail_fast
[2022-04-29T10:46:42.179Z] E raise FailFast()
[2022-04-29T10:46:42.179Z] E editor_python_test_tools.utils.FailFast
[2022-04-29T10:46:42.179Z] E Test result: FAILURE
```
**Attachments**
[log.txt](https://github.com/o3de/o3de/files/8595521/log.txt)
",1.0,"AR Bug Report - Test AutomatedTesting::MultiplayerTests_Main failed - **Describe the bug**
`AutomatedTesting::MultiplayerTests_Main.main::TEST_RUN` failed while running AR
**Failed Jenkins Job Information:**
platform linux -- Python 3.7.12, pytest-5.3.2, py-1.11.0, pluggy-0.13.1 -- /data/workspace/o3de/python/runtime/python-3.7.12-rev2-linux/python/bin/python
```
[2022-04-29T10:46:42.179Z] E Failed: Test test_Multiplayer_AutoComponent_NetworkInput:
[2022-04-29T10:46:42.179Z] E Test FAILED
[2022-04-29T10:46:42.179Z] E ------------
[2022-04-29T10:46:42.179Z] E | Output |
[2022-04-29T10:46:42.179Z] E ------------
[2022-04-29T10:46:42.179Z] E Starting test Multiplayer_AutoComponent_NetworkInput...
[2022-04-29T10:46:42.179Z] E Test Multiplayer_AutoComponent_NetworkInput finished.
[2022-04-29T10:46:42.179Z] E Report:
[2022-04-29T10:46:42.179Z] E [SUCCESS] Success: Unexpected line not found: LaunchEditorServer failed! The ServerLauncher binary is missing!
[2022-04-29T10:46:42.179Z] E [SUCCESS] Success: Found expected line: Editor has connected to the editor-server.
[2022-04-29T10:46:42.179Z] E [SUCCESS] Success: Found expected line: Editor is sending the editor-server the level data packet.
[2022-04-29T10:46:42.179Z] E [FAILED ] Failure: Failed to find expected line: Logger: Editor Server completed receiving the editor's level assets, responding to Editor...
[2022-04-29T10:46:42.179Z] E EXCEPTION raised:
[2022-04-29T10:46:42.179Z] E Traceback (most recent call last):
[2022-04-29T10:46:42.179Z] E File ""/data/workspace/o3de/AutomatedTesting/Gem/PythonTests/EditorPythonTestTools/editor_python_test_tools/utils.py"", line 305, in start_test
[2022-04-29T10:46:42.179Z] E test_function()
[2022-04-29T10:46:42.179Z] E File ""/data/workspace/o3de/AutomatedTesting/Gem/PythonTests/Multiplayer/tests/Multiplayer_AutoComponent_NetworkInput.py"", line 57, in Multiplayer_AutoComponent_NetworkInput
[2022-04-29T10:46:42.179Z] E helper.multiplayer_enter_game_mode(Tests.enter_game_mode)
[2022-04-29T10:46:42.179Z] E File ""/data/workspace/o3de/AutomatedTesting/Gem/PythonTests/EditorPythonTestTools/editor_python_test_tools/utils.py"", line 178, in multiplayer_enter_game_mode
[2022-04-29T10:46:42.179Z] E TestHelper.succeed_if_log_line_found(""EditorServer"", ""Logger: Editor Server completed receiving the editor's level assets, responding to Editor..."", section_tracer.prints, 5.0)
[2022-04-29T10:46:42.179Z] E File ""/data/workspace/o3de/AutomatedTesting/Gem/PythonTests/EditorPythonTestTools/editor_python_test_tools/utils.py"", line 138, in succeed_if_log_line_found
[2022-04-29T10:46:42.179Z] E Report.critical_result((""Found expected line: "" + line, ""Failed to find expected line: "" + line), TestHelper.find_line(window, line, print_infos))
[2022-04-29T10:46:42.179Z] E File ""/data/workspace/o3de/AutomatedTesting/Gem/PythonTests/EditorPythonTestTools/editor_python_test_tools/utils.py"", line 410, in critical_result
[2022-04-29T10:46:42.179Z] E TestHelper.fail_fast(fast_fail_message)
[2022-04-29T10:46:42.179Z] E File ""/data/workspace/o3de/AutomatedTesting/Gem/PythonTests/EditorPythonTestTools/editor_python_test_tools/utils.py"", line 213, in fail_fast
[2022-04-29T10:46:42.179Z] E raise FailFast()
[2022-04-29T10:46:42.179Z] E editor_python_test_tools.utils.FailFast
[2022-04-29T10:46:42.179Z] E Test result: FAILURE
```
**Attachments**
[log.txt](https://github.com/o3de/o3de/files/8595521/log.txt)
",1,ar bug report test automatedtesting multiplayertests main failed describe the bug automatedtesting multiplayertests main main test run failed while running ar failed jenkins job information platform linux python pytest py pluggy data workspace python runtime python linux python bin python e failed test test multiplayer autocomponent networkinput e test failed e e output e e starting test multiplayer autocomponent networkinput e test multiplayer autocomponent networkinput finished e report e success unexpected line not found launcheditorserver failed the serverlauncher binary is missing e success found expected line editor has connected to the editor server e success found expected line editor is sending the editor server the level data packet e failure failed to find expected line logger editor server completed receiving the editor s level assets responding to editor e exception raised e traceback most recent call last e file data workspace automatedtesting gem pythontests editorpythontesttools editor python test tools utils py line in start test e test function e file data workspace automatedtesting gem pythontests multiplayer tests multiplayer autocomponent networkinput py line in multiplayer autocomponent networkinput e helper multiplayer enter game mode tests enter game mode e file data workspace automatedtesting gem pythontests editorpythontesttools editor python test tools utils py line in multiplayer enter game mode e testhelper succeed if log line found editorserver logger editor server completed receiving the editor s level assets responding to editor section tracer prints e file data workspace automatedtesting gem pythontests editorpythontesttools editor python test tools utils py line in succeed if log line found e report critical result found expected line line failed to find expected line line testhelper find line window line print infos e file data workspace automatedtesting gem pythontests editorpythontesttools editor python test tools utils py line in critical result e testhelper fail fast fast fail message e file data workspace automatedtesting gem pythontests editorpythontesttools editor python test tools utils py line in fail fast e raise failfast e editor python test tools utils failfast e test result failure attachments ,1
9271,27832546230.0,IssuesEvent,2023-03-20 06:46:44,pingcap/tiflow,https://api.github.com/repos/pingcap/tiflow,closed,Table sink close gets stuck and CDC changefeed does not advance ,type/bug severity/critical found/automation area/ticdc affects-6.5 affects-6.6,"### What did you do?
1. create mysql changefeed
2. Run workload tpcc prepare
3. scale out tikv from 3 to 7
4. scale in cdc from 3 to 1 node
5. tpcc run, and at the same time, scale out cdc from 3 to 6, and scale tikv in from 7 to 3.
6. wait checkpoint advance
### What did you expect to see?
changefeed advances
### What did you see instead?
changefeed does not advance


### Versions of the cluster
cdc version:
[""Welcome to Change Data Capture (CDC)""] [release-version=v6.7.0-alpha] [git-hash=04f7e22aaa07dfedff853fe9b7a08675bbbf0fe1] [git-branch=heads/refs/tags/v6.7.0-alpha] [utc-build-time=""2023-03-11 11:32:23""] [go-version=""go version go1.20.2 linux/amd64""] [failpoint-build=false]
",1.0,"Table sink close stucks and CDC changefeed not advance - ### What did you do?
1. create mysql changefeed
2. Run workload tpcc prepare
3. scale out tikv from 3 to 7
4. scale in cdc from 3 to 1 node
5. tpcc run, and at the same time, scale out cdc from 3 to 6, and scale tikv in from 7 to 3.
6. wait checkpoint advance
### What did you expect to see?
changefeed advances
### What did you see instead?
changefeed does not advance


### Versions of the cluster
cdc version:
[""Welcome to Change Data Capture (CDC)""] [release-version=v6.7.0-alpha] [git-hash=04f7e22aaa07dfedff853fe9b7a08675bbbf0fe1] [git-branch=heads/refs/tags/v6.7.0-alpha] [utc-build-time=""2023-03-11 11:32:23""] [go-version=""go version go1.20.2 linux/amd64""] [failpoint-build=false]
",1,table sink close stucks and cdc changefeed not advance what did you do create mysql changefeed run workload tpcc prepare scale out tikv from to scale in cdc from to node tpcc run and at the same time scale out cdc from to and scale tikv in from to wait checkpoint advance what did you expect to see changefeed advances what did you see instead changefeed not advances versions of the cluster cdc version ,1
1269,9815410570.0,IssuesEvent,2019-06-13 12:35:58,elastic/apm-integration-testing,https://api.github.com/repos/elastic/apm-integration-testing,closed,Support for multibranch pipeline,[zube]: In Review automation,"As an effort to standardise CI/CD work across the Observability teams, alongside using the JJBB framework, we want to add support for Jenkins' multibranch pipelines.",1.0,"Support for multibranch pipeline - As an effort to standardise CI/CD work across the Observability teams, alongside using the JJBB framework, we want to add support for Jenkins' multibranch pipelines.",1,support for multibranch pipeline as an effort to standardise ci cd work across the observability teams alongside using the jjbb framework we want to add support for jenkins multibranch pipelines ,1
7468,24944522212.0,IssuesEvent,2022-10-31 22:13:21,ericcornelissen/webmangler,https://api.github.com/repos/ericcornelissen/webmangler,closed,Address use of deprecated GitHub Actions command `set-output`,automation,"# Task
## Description
All GitHub Actions workflows should be updated to not rely on `set-output` or `save-state` which, per the following warnings that may be seen on current workflow runs (example), have been deprecated:
> The `set-output` command is deprecated and will be disabled soon. Please upgrade to using Environment Files. For more information see: https://github.blog/changelog/2022-10-11-github-actions-deprecating-save-state-and-set-output-commands/
At the time of writing, this warning can be seen on runs of the following workflows of this project:
- [`code-analysis.yml`](https://github.com/ericcornelissen/webmangler/blob/7999a8bd1c4e653740f78715b3573d04fc39fa4e/.github/workflows/code-analysis.yml)
- [`code-checks.yml`](https://github.com/ericcornelissen/webmangler/blob/7999a8bd1c4e653740f78715b3573d04fc39fa4e/.github/workflows/code-checks.yml)
### `code-analysis.yml`
This is due to the use of `@actions/core@v1.9.1` in v2.1.27 of the CodeQL Action. This is already fixed in the CodeQL Action, but not yet released. This will be addressed automatically by the regular Renovate Pull Requests.
### `code-checks.yml`
This is due to the use of `echo ""::set-output name=xxx::yyy""` in embedded scripts in the [""Determine jobs"" job](https://github.com/ericcornelissen/webmangler/blob/7999a8bd1c4e653740f78715b3573d04fc39fa4e/.github/workflows/code-checks.yml#L11-L107). This must be addressed manually.
#### Suggested solution
Based on https://github.com/github/codeql-action/compare/v2.1.27...v2.1.28 it looks like changing the workflow as follows _should_ work:
```diff
- echo ""::set-output name=xxx::$yyy""
+ echo ""xxx=$yyy"" >> $GITHUB_OUTPUT
```
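For context, outputs written through the environment file are read exactly as before, via `steps.<id>.outputs.<name>` within a job or `needs.<job>.outputs.<name>` across jobs. Below is a minimal, hypothetical sketch of the pattern; the job, step, and output names are placeholders and are not taken from `code-checks.yml`:
```yaml
# Hypothetical sketch of the GITHUB_OUTPUT pattern; job, step, and output names are placeholders.
on: push
jobs:
  determine:
    runs-on: ubuntu-latest
    outputs:
      packages: ${{ steps.plan.outputs.packages }}
    steps:
      - id: plan
        # Write the output through the environment file instead of ::set-output
        run: echo packages=core,cli >> $GITHUB_OUTPUT
  checks:
    needs: determine
    runs-on: ubuntu-latest
    steps:
      - run: echo Running checks for ${{ needs.determine.outputs.packages }}
```
Only the `echo` lines need to change; how the outputs are consumed elsewhere in the workflow stays the same.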
## Related
- #152
- #392
- #399
- #412",1.0,"Address use of deprecated GitHub Actions command `set-output` - # Task
## Description
All GitHub Actions workflows should be updated to not rely on `set-output` or `save-state` which, per the following warnings that may be seen on current workflow runs (example), have been deprecated:
> The `set-output` command is deprecated and will be disabled soon. Please upgrade to using Environment Files. For more information see: https://github.blog/changelog/2022-10-11-github-actions-deprecating-save-state-and-set-output-commands/
At the time of writing, this warning can be seen on runs of the following workflows of this project:
- [`code-analysis.yml`](https://github.com/ericcornelissen/webmangler/blob/7999a8bd1c4e653740f78715b3573d04fc39fa4e/.github/workflows/code-analysis.yml)
- [`code-checks.yml`](https://github.com/ericcornelissen/webmangler/blob/7999a8bd1c4e653740f78715b3573d04fc39fa4e/.github/workflows/code-checks.yml)
### `code-analysis.yml`
This is due to the use of `@actions/core@v1.9.1` in v2.1.27 of the CodeQL Action. This is already fixed in the CodeQL Action, but not yet released. This will be addressed automatically by the regular Renovate Pull Requests.
### `code-checks.yml`
This is due to the use of `echo ""::set-output name=xxx::yyy""` in embedded scripts in the [""Determine jobs"" job](https://github.com/ericcornelissen/webmangler/blob/7999a8bd1c4e653740f78715b3573d04fc39fa4e/.github/workflows/code-checks.yml#L11-L107). This must be addressed manually.
#### Suggested solution
Based on https://github.com/github/codeql-action/compare/v2.1.27...v2.1.28 it looks like changing the workflow as follows _should_ work:
```diff
- echo ""::set-output name=xxx::$yyy""
+ echo ""xxx=$yyy"" >> $GITHUB_OUTPUT
```
## Related
- #152
- #392
- #399
- #412",1,address use of deprecated github actions command set output task description all github actions workflows should be updated to not rely on set output or save state which per the following warnings that may be seen on current workflow runs example have been deprecated the set output command is deprecated and will be disabled soon please upgrade to using environment files for more information see at the time of writing this warning can be seen on runs of the following workflows of this project code analysis yml this is due to the use of actions core in of the codeql action this is already fixed in the codeql action but not yet released this will be addressed automatically by the regular renovate pull requests code checks yml this is due to the use of echo set output name xxx yyy in embedded scripts in the this must be addressed manually suggested solution based on it looks like changing the workflow as follows should work diff echo set output name xxx yyy echo xxx yyy github output related ,1
7388,24767699872.0,IssuesEvent,2022-10-22 18:48:26,surge-synthesizer/surge,https://api.github.com/repos/surge-synthesizer/surge,closed,Live 11 Parameter Feedback,Host Specific Host Automation Bug Report,"XT in Live 11 seems to have slider drag events fight with the automation recording
Just so you know: start Live 11 and drag a slider, and it wobbles.
Sometimes this manifests itself as sliders suddenly resetting to 0 in some sort of cascade.
This is clearly something in the notify / return value loop in live while dragging and begin/end not suppressing returns or some such but we need to debug it for our next release.",1.0,"Live 11 Parameter Feedback - XT in Live 11 seems to have slider drag events fight with the automation recording
Just so you know: start Live 11 and drag a slider, and it wobbles.
Sometimes this manifests itself as sliders suddenly resetting to 0 in some sort of cascade.
This is clearly something in the notify / return value loop in live while dragging and begin/end not suppressing returns or some such but we need to debug it for our next release.",1,live parameter feedback xt in live seems to have slider drag events fight with the automation recording just you know start live and drag a slider and it wobbles sometimes this manifests itself as slider resets to suddenly in some sort of cascade this is clearly something in the notify return value loop in live while dragging and begin end not suppressing returns or some such but we need to debug it for our next release ,1
9936,30783065865.0,IssuesEvent,2023-07-31 11:24:36,deinstapel/eks-rolling-update,https://api.github.com/repos/deinstapel/eks-rolling-update,opened,GitHub actions from forks will currently fail,automation,"Creating a PR from a fork will currently fail in github actions. This is due to the `github.actor` being unauthorized to push to the container registry. As we manually approve PRs as safe before running github actions, we should use a dedicated bot account with PAT instead.",1.0,"GitHub actions from forks will currently fail - Creating a PR from a fork will currently fail in github actions. This is due to the `github.actor` being unauthorized to push to the container registry. As we manually approve PRs as safe before running github actions, we should use a dedicated bot account with PAT instead.",1,github actions from forks will currently fail creating a pr from a fork will currently fail in github actions this is due to the github actor being unauthorized to push to the container registry as we manually approve prs as safe before running github actions we should use a dedicated bot account with pat instead ,1
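As a rough illustration of the fix described in the eks-rolling-update issue above, a registry login step could authenticate with a dedicated bot account's PAT stored in repository secrets rather than with `github.actor`. The registry, action version, and secret names below are assumptions, not details from the issue:
```yaml
# Hypothetical sketch only: registry, action version, and secret names are assumptions.
on: push
jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: docker/login-action@v2
        with:
          registry: ghcr.io
          username: ${{ secrets.BOT_USERNAME }}
          password: ${{ secrets.BOT_PAT }}
```
With credentials like these, pushing to the container registry no longer depends on which `github.actor` opened the pull request.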
1488,10197698115.0,IssuesEvent,2019-08-13 01:39:25,askmench/mench-web-app,https://api.github.com/repos/askmench/mench-web-app,closed,Enable Users to Clear Action Plan Progression Links,Bot/Chat-Automation Inputs/Forms,So they can reset their mistakes and re-take all or parts of their Action Plan intentions.,1.0,Enable Users to Clear Action Plan Progression Links - So they can reset their mistakes and re-take all or parts of their Action Plan intentions.,1,enable users to clear action plan progression links so they can reset their mistakes and re take all or parts of their action plan intentions ,1
5829,21333609143.0,IssuesEvent,2022-04-18 11:51:45,red-hat-storage/ocs-ci,https://api.github.com/repos/red-hat-storage/ocs-ci,opened,Wait for project selection in namespace drop-down on OCP console,bug ui_automation,"Issue seen with Run `1649886578`
Also take screenshots on important pages.",1.0,"Wait for project selection in namespace drop-down on OCP console - Issue seen with Run `1649886578`
Also take screenshots on important pages.",1,wait for project selection is namespace drop down on ocp console issue seen with run also take screenshots on important pages ,1
48593,10263500707.0,IssuesEvent,2019-08-22 14:29:11,OrifInformatique/gestion_questionnaires,https://api.github.com/repos/OrifInformatique/gestion_questionnaires,closed,Load only the most frequently used models at startup,code enhancement,"In most controllers, the `__construct` function loads all the models used, even if they are only used once in the controller.
It would probably be preferable to load only the ones that are used often in `__construct`, and the others in the functions where they are used.",1.0,"Load only the most frequently used models at startup - In most controllers, the `__construct` function loads all the models used, even if they are only used once in the controller.
It would probably be preferable to load only the ones that are used often in `__construct`, and the others in the functions where they are used.",0,load only the most frequently used models at startup in most controllers the construct function loads all the models used even if they are only used once in the controller it would probably be preferable to load only the ones that are used often in construct and the others in the functions where they are used ,0
336240,24490353024.0,IssuesEvent,2022-10-10 00:24:39,datajoint/datajoint-elements,https://api.github.com/repos/datajoint/datajoint-elements,closed,Add documentation on release process,documentation,"- Add versions of acquisition and analysis packages to release notes.
- Add this documentation within the `Management` section.",1.0,"Add documentation on release process - - Add versions of acquisition and analysis packages to release notes.
- Add this documentation within the `Management` section.",0,add documentation on release process add versions of acquisition and analysis packages to release notes add this documentation within the management section ,0
392353,26935956376.0,IssuesEvent,2023-02-07 20:40:33,onflow/flow-cli,https://api.github.com/repos/onflow/flow-cli,closed,Include a clarification about how to provide the arguments of a transaction like a json file,Documentation,"### Issue To Be Solved
How to pass a json file for providing the arguments of a transaction using flow-cli is not documented
### Suggest A Solution
Update the [docs](https://developers.flow.com/tools/flow-cli/send-signed-transactions) with clear instructions about how to do it
`flow transactions send {filename} --args-json ""$(cat myfile.json)""` || `flow transactions send {filename}""$(wget -O- -q https://raw.githubusercontent.com/onflow/some-flow-repo/some-commit-hash-or-tag/path-to/arguments.json)"" `",1.0,"Include a clarification about how to provide the arguments of a transaction like a json file - ### Issue To Be Solved
How to pass a json file for providing the arguments of a transaction using flow-cli is not documented
### Suggest A Solution
Update the [docs](https://developers.flow.com/tools/flow-cli/send-signed-transactions) with clear instructions about how to do it
`flow transactions send {filename} --args-json ""$(cat myfile.json)""` || `flow transactions send {filename}""$(wget -O- -q https://raw.githubusercontent.com/onflow/some-flow-repo/some-commit-hash-or-tag/path-to/arguments.json)"" `",0,include a clarification about how to provide the arguments of a transaction like a json file issue to be solved how to pass a json file for providing the arguments of a transaction using flow cli is not documented suggest a solution update the with clear instructions about how to do it flow transactions send filename args json cat myfile json flow transactions send filename wget o q ,0
8782,27172243566.0,IssuesEvent,2023-02-17 20:35:23,OneDrive/onedrive-api-docs,https://api.github.com/repos/OneDrive/onedrive-api-docs,closed,"Embeddable URLs from the ""driveItem: preview"" endpoint fail to load ~20% of the time.",status:investigating Needs: Attention :wave: area:Previewers automation:Closed,"
#### Category
- [ ] Question
- [ ] Documentation issue
- [x] Bug
#### Expected or Desired Behavior
Embeddable URL that should display the contents of the sharepoint office file in the web browser (as ready only).
#### Observed Behavior
Fails about 20% of the time with JS errors on the viewer (through the link returned). When this occurs, the viewer does not load, and the page needs to be refreshed.
#### Steps to Reproduce
Call the ""driveItem: preview"" endpoint, receive an embeddable link to display within an iframe. Fails to load ~20% of the time.
Thank you.
",1.0,"Embeddable URLs from the ""driveItem: preview"" endpoint fail to load ~20% of the time. -
#### Category
- [ ] Question
- [ ] Documentation issue
- [x] Bug
#### Expected or Desired Behavior
Embeddable URL that should display the contents of the sharepoint office file in the web browser (as ready only).
#### Observed Behavior
Fails about 20% of the time with JS errors on the viewer (through the link returned). When this occurs, the viewer does not load, and the page needs to be refreshed.
#### Steps to Reproduce
Call the ""driveItem: preview"" endpoint, receive an embeddable link to display within an iframe. Fails to load ~20% of the time.
Thank you.
",1,embeddable urls from the driveitem preview endpoint fail to load of the time category question documentation issue bug expected or desired behavior embeddable url that should display the contents of the sharepoint office file in the web browser as ready only observed behavior fails about of the time with js errors on the viewer through the link returned when this occurs the viewer does not load and the page needs to be refreshed steps to reproduce call the driveitem preview endpoint receive an embeddable link to display within an iframe fails to load of the time thank you ,1
5755,20981764623.0,IssuesEvent,2022-03-28 20:42:38,willowtreeapps/vocable-ios,https://api.github.com/repos/willowtreeapps/vocable-ios,closed,[Test] Re-enable CustomCategoriesTests,automation,"We accidentally disabled the CustomCategoriesTests somehow. We should re-enable them so that we can get the test results during builds.
AC:
All tests in CustomCategoriesTests are enabled and run during builds.
",1.0,"[Test] Re-enable CustomCategoriesTests - We accidentally disabled the CustomCategoriesTests somehow. We should re-enable them so that we can get the test results during builds.
AC:
All tests in CustomCategoriesTests are enabled and run during builds.
",1, re enable customcategoriestests we accidentally disabled the customcategoriestests somehow we should re enable them so that we can get the test results during builds ac all tests in customcategoriestests are enabled and run during builds ,1
350149,31857391762.0,IssuesEvent,2023-09-15 08:28:28,microsoft/AzureStorageExplorer,https://api.github.com/repos/microsoft/AzureStorageExplorer,opened,Only 1000 entities are loaded when sorting by one column if there is a query clause under query mode,🧪 testing :gear: tables,"**Storage Explorer Version:** 1.32.0-dev (93)
**Build Number:** 20230915.2
**Branch:** main
**Platform/OS:** Windows 10/Linux Ubuntu 20.04/MacOS Ventura 13.5.2 (Apple M1 Pro)
**Architecture:** x64/x64/x64
**How Found:** Exploratory testing
**Regression From:** Not a regression
## Steps to Reproduce ##
1. Open 'Settings' -> Enable the setting 'Global Sort'.
2. Expand one storage account -> Tables.
3. Open one table which contains more than 1000 entities.
4. Click 'Query' -> Make sure there is at least one query clause.
5. Sort entities by one column.
6. Check whether all entities are loaded.
## Expected Experience ##
All entities are loaded.
## Actual Experience ##
Only 1000 entities are loaded.
## Additional Context ##
1. This issue doesn't reproduce when there is no query clause.
2. Here is the record:

",1.0,"Only 1000 entities are loaded when sorting by one column if there is a query clause under query mode - **Storage Explorer Version:** 1.32.0-dev (93)
**Build Number:** 20230915.2
**Branch:** main
**Platform/OS:** Windows 10/Linux Ubuntu 20.04/MacOS Ventura 13.5.2 (Apple M1 Pro)
**Architecture:** x64/x64/x64
**How Found:** Exploratory testing
**Regression From:** Not a regression
## Steps to Reproduce ##
1. Open 'Settings' -> Enable the setting 'Global Sort'.
2. Expand one storage account -> Tables.
3. Open one table which contains more than 1000 entities.
4. Click 'Query' -> Make sure there is at least one query clause.
5. Sort entities by one column.
6. Check whether all entities are loaded.
## Expected Experience ##
All entities are loaded.
## Actual Experience ##
Only 1000 entities are loaded.
## Additional Context ##
1. This issue doesn't reproduce when there is no query clause.
2. Here is the record:

",0,only entities are loaded when sorting by one column if there is a query clause under query mode storage explorer version dev build number branch main platform os windows linux ubuntu macos ventura apple pro architecture how found exploratory testing regression from not a regression steps to reproduce open settings enable the setting global sort expand one storage account tables open one table which contains more than entities click query make sure there is at least one query clause sort entities by one column check whether all entities are loaded expected experience all entities are loaded actual experience only entities are loaded additional context this issue doesn t reproduce when there is no query clause here is the record ,0
272092,20732962109.0,IssuesEvent,2022-03-14 11:08:12,Arquisoft/dede_en2b,https://api.github.com/repos/Arquisoft/dede_en2b,closed,Contribution for deliverable 1 - Diego Martín,documentation v0.1,"I have developed part of points 1 and 4 of the documentation, the last one still to be expanded. Moreover, I was in charge of writing up the topics discussed at the second meeting. And last but not least, I checked the installation of the software beforehand and helped my teammates, since some of them had trouble with npm and Ruby.",1.0,"Contribution for deliverable 1 - Diego Martín - I have developed part of points 1 and 4 of the documentation, the last one still to be expanded. Moreover, I was in charge of writing up the topics discussed at the second meeting. And last but not least, I checked the installation of the software beforehand and helped my teammates, since some of them had trouble with npm and Ruby.",0,contribution for deliverable diego martín i have developed part of point and of the documentation the last one to be expanded moreover i was in charge of writting the topics discussed on the second reunion and last but not least i checked beforehand the instalation of the software and helped my teammates since some of them had trouble with npm and ruby ,0
42905,5545726310.0,IssuesEvent,2017-03-22 22:21:22,gustafl/youtube-blacklist-chrome-extension,https://api.github.com/repos/gustafl/youtube-blacklist-chrome-extension,opened,Design the popup,design,"It should contain the following, in order:
* Logo
* Title text
* Version number
* Disable/enable toggle button
* Number of blacklisted users
* Number of comments hidden on page
* Link to extension in Web Store",1.0,"Design the popup - It should contain the following, in order:
* Logo
* Title text
* Version number
* Disable/enable toggle button
* Number of blacklisted users
* Number of comments hidden on page
* Link to extension in Web Store",0,design the popup it should contain the following in order logo title text version number disable enable toggle button number of blacklisted users number of comments hidden on page link to extension in web store,0
171720,27168183667.0,IssuesEvent,2023-02-17 16:58:18,Lightning-AI/lightning,https://api.github.com/repos/Lightning-AI/lightning,closed,"Is `precision=""mixed""` redundant?",refactor design precision: amp,"## Proposed refactoring or deprecation
Does `precision=""mixed""` act differently to `precision=16` in any way?
I understand that ""mixed"" is more correct as 16-bit precision can still run some computations in 32-bit.
### Motivation
In https://github.com/PyTorchLightning/pytorch-lightning/pull/9763 I noticed we did not even have a `PrecisionType` for `""mixed""`.
There's a single test in the codebase passing the ""mixed"" value:
https://github.com/PyTorchLightning/pytorch-lightning/blob/master/tests/plugins/test_deepspeed_plugin.py#L153
And no mention at all of this value in the docs.
### Pitch
Have one value to set this, whether it is `16` or `""mixed""`. Most likely `16` since its the one widely used.
Otherwise, add tests for passing `""mixed""`
### Additional context
```python
$ grep -iIrn '""mixed""' pytorch_lightning
pytorch_lightning/plugins/training_type/sharded.py:62: is_fp16 = precision in (""mixed"", 16)
pytorch_lightning/plugins/training_type/fully_sharded.py:135: mixed_precision=precision == ""mixed"",
pytorch_lightning/plugins/training_type/ipu.py:42: if self.precision in (""mixed"", 16):
pytorch_lightning/plugins/training_type/deepspeed.py:405: dtype = torch.float16 if self.precision in (16, ""mixed"") else torch.float32
pytorch_lightning/plugins/training_type/deepspeed.py:473: dtype = torch.float16 if self.precision in (16, ""mixed"") else torch.float32
pytorch_lightning/plugins/training_type/deepspeed.py:602: if precision in (16, ""mixed""):
pytorch_lightning/plugins/precision/mixed.py:26: precision: Union[str, int] = ""mixed""
pytorch_lightning/plugins/precision/fully_sharded_native_amp.py:26: precision = ""mixed""
```
```python
$ grep -iIrn '""mixed""' tests
tests/plugins/test_deepspeed_plugin.py:153:@pytest.mark.parametrize(""precision"", [16, ""mixed""])
```
```python
$ grep -Irn 'mixed' docs | grep 'precision='
# no mention in the docs!
```
______________________________________________________________________
#### If you enjoy Lightning, check out our other projects! ⚡
- [**Metrics**](https://github.com/PyTorchLightning/metrics): Machine learning metrics for distributed, scalable PyTorch applications.
- [**Flash**](https://github.com/PyTorchLightning/lightning-flash): The fastest way to get a Lightning baseline! A collection of tasks for fast prototyping, baselining, finetuning and solving problems with deep learning
- [**Bolts**](https://github.com/PyTorchLightning/lightning-bolts): Pretrained SOTA Deep Learning models, callbacks and more for research and production with PyTorch Lightning and PyTorch
- [**Lightning Transformers**](https://github.com/PyTorchLightning/lightning-transformers): Flexible interface for high performance research using SOTA Transformers leveraging Pytorch Lightning, Transformers, and Hydra.
cc @justusschock @awaelchli @akihironitta @rohitgr7 @tchaton @borda @carmocca",1.0,"Is `precision=""mixed""` redundant? - ## Proposed refactoring or deprecation
Does `precision=""mixed""` act differently to `precision=16` in any way?
I understand that ""mixed"" is more correct as 16-bit precision can still run some computations in 32-bit.
### Motivation
In https://github.com/PyTorchLightning/pytorch-lightning/pull/9763 I noticed we did not even have a `PrecisionType` for `""mixed""`.
There's a single test in the codebase passing the ""mixed"" value:
https://github.com/PyTorchLightning/pytorch-lightning/blob/master/tests/plugins/test_deepspeed_plugin.py#L153
And no mention at all of this value in the docs.
### Pitch
Have one value to set this, whether it is `16` or `""mixed""`. Most likely `16` since its the one widely used.
Otherwise, add tests for passing `""mixed""`
### Additional context
```python
$ grep -iIrn '""mixed""' pytorch_lightning
pytorch_lightning/plugins/training_type/sharded.py:62: is_fp16 = precision in (""mixed"", 16)
pytorch_lightning/plugins/training_type/fully_sharded.py:135: mixed_precision=precision == ""mixed"",
pytorch_lightning/plugins/training_type/ipu.py:42: if self.precision in (""mixed"", 16):
pytorch_lightning/plugins/training_type/deepspeed.py:405: dtype = torch.float16 if self.precision in (16, ""mixed"") else torch.float32
pytorch_lightning/plugins/training_type/deepspeed.py:473: dtype = torch.float16 if self.precision in (16, ""mixed"") else torch.float32
pytorch_lightning/plugins/training_type/deepspeed.py:602: if precision in (16, ""mixed""):
pytorch_lightning/plugins/precision/mixed.py:26: precision: Union[str, int] = ""mixed""
pytorch_lightning/plugins/precision/fully_sharded_native_amp.py:26: precision = ""mixed""
```
```python
$ grep -iIrn '""mixed""' tests
tests/plugins/test_deepspeed_plugin.py:153:@pytest.mark.parametrize(""precision"", [16, ""mixed""])
```
```python
$ grep -Irn 'mixed' docs | grep 'precision='
# no mention in the docs!
```
______________________________________________________________________
#### If you enjoy Lightning, check out our other projects! ⚡
- [**Metrics**](https://github.com/PyTorchLightning/metrics): Machine learning metrics for distributed, scalable PyTorch applications.
- [**Flash**](https://github.com/PyTorchLightning/lightning-flash): The fastest way to get a Lightning baseline! A collection of tasks for fast prototyping, baselining, finetuning and solving problems with deep learning
- [**Bolts**](https://github.com/PyTorchLightning/lightning-bolts): Pretrained SOTA Deep Learning models, callbacks and more for research and production with PyTorch Lightning and PyTorch
- [**Lightning Transformers**](https://github.com/PyTorchLightning/lightning-transformers): Flexible interface for high performance research using SOTA Transformers leveraging Pytorch Lightning, Transformers, and Hydra.
cc @justusschock @awaelchli @akihironitta @rohitgr7 @tchaton @borda @carmocca",0,is precision mixed redundant proposed refactoring or deprecation does precision mixed act differently to precision in any way i understand that mixed is more correct as bit precision can still run some computations in bit motivation in i noticed we did not even have a precisiontype for mixed there s a single test in the codebase passing the mixed value and no mention at all of this value in the docs pitch have one value to set this whether it is or mixed most likely since its the one widely used otherwise add tests for passing mixed additional context python grep iirn mixed pytorch lightning pytorch lightning plugins training type sharded py is precision in mixed pytorch lightning plugins training type fully sharded py mixed precision precision mixed pytorch lightning plugins training type ipu py if self precision in mixed pytorch lightning plugins training type deepspeed py dtype torch if self precision in mixed else torch pytorch lightning plugins training type deepspeed py dtype torch if self precision in mixed else torch pytorch lightning plugins training type deepspeed py if precision in mixed pytorch lightning plugins precision mixed py precision union mixed pytorch lightning plugins precision fully sharded native amp py precision mixed python grep iirn mixed tests tests plugins test deepspeed plugin py pytest mark parametrize precision python grep irn mixed docs grep precision no mention in the docs if you enjoy lightning check out our other projects ⚡ machine learning metrics for distributed scalable pytorch applications the fastest way to get a lightning baseline a collection of tasks for fast prototyping baselining finetuning and solving problems with deep learning pretrained sota deep learning models callbacks and more for research and production with pytorch lightning and pytorch flexible interface for high performance research using sota transformers leveraging pytorch lightning transformers and hydra cc justusschock awaelchli akihironitta tchaton borda carmocca,0
1031,9201757206.0,IssuesEvent,2019-03-07 20:30:06,home-assistant/home-assistant,https://api.github.com/repos/home-assistant/home-assistant,closed,Home assistant interpreting non latin aliases in automations.yaml as non unique,component: automation waiting-for-reply,"**Home Assistant release with the issue:**
0.77.3
**Component/platform:**
automations.yaml
**Description of problem:**
automations.yaml with 2 or more automations like this:
```
- id: state_door_closed
alias: ""Сообщить о закрытии двери""
trigger:
- entity_id: binary_sensor.home_door
platform: state
to: 'off'
action:
- data:
message: ""Дверь закрыта""
service: notify.state
- id: state_door_opened
alias: ""Сообщить об открытии двери""
trigger:
- entity_id: binary_sensor.home_door
platform: state
to: 'on'
action:
- data:
message: ""Дверь открыта""
service: notify.state
```
causes errors on Home Assistant start: ""Entity id already exists: automation._____"" because the code uses only latin symbols (in this case, spaces) and therefore considers the aliases identical.
And that is not the only problem - on Home Assistant start those automations are not loaded at all.
**But if we manually reload automations, they appear in the list and work - very strange (so it can work!)**
So we give the id in English and the alias as we want to see it in lists, and we get errors. Maybe rename alias to friendly_name and treat its value like a friendly name, taking the unique name from the id? The Automation Editor generates unique ids, for example, and there aliases can be the same (and fully latin) too...
So problem not in non latin symbols.",1.0,"Home assistant interpreting non latin aliases in automations.yaml as non unique - **Home Assistant release with the issue:**
0.77.3
**Component/platform:**
automations.yaml
**Description of problem:**
automations.yaml with 2 or more automations like this:
```
- id: state_door_closed
alias: ""Сообщить о закрытии двери""
trigger:
- entity_id: binary_sensor.home_door
platform: state
to: 'off'
action:
- data:
message: ""Дверь закрыта""
service: notify.state
- id: state_door_opened
alias: ""Сообщить об открытии двери""
trigger:
- entity_id: binary_sensor.home_door
platform: state
to: 'on'
action:
- data:
message: ""Дверь открыта""
service: notify.state
```
causes errors on Home Assistant start: ""Entity id already exists: automation._____"" because the code uses only latin symbols (in this case, spaces) and therefore considers the aliases identical.
And that is not the only problem - on Home Assistant start those automations are not loaded at all.
**But if we manually reload automations, they appear in the list and work - very strange (so it can work!)**
So we give the id in English and the alias as we want to see it in lists, and we get errors. Maybe rename alias to friendly_name and treat its value like a friendly name, taking the unique name from the id? The Automation Editor generates unique ids, for example, and there aliases can be the same (and fully latin) too...
So problem not in non latin symbols.",1,home assistant interpreting non latin aliases in automations yaml as non unique home assistant release with the issue component platform automations yaml description of problem automations yaml with or more automations like this id state door closed alias сообщить о закрытии двери trigger entity id binary sensor home door platform state to off action data message дверь закрыта service notify state id state door opened alias сообщить об открытии двери trigger entity id binary sensor home door platform state to on action data message дверь открыта service notify state cause errors on home assistant start entity id already exists automation because code using only latin symbols in that case spaces and thinks that aliases same and it s not only one problem on home assistant start that automations not loaded at all but if we made manual automations reload they appear in list and works very strange so work is possible so we give id in english alias as we want to see in lists and got errors may be rename alias in friendly name and works with it values like with friendly name and unique name take from id automation editor generates unique ids for example and aliases can be same and fully latin too so problem not in non latin symbols ,1
102249,16550524748.0,IssuesEvent,2021-05-28 08:03:34,Vento-Nuenenen/inowo,https://api.github.com/repos/Vento-Nuenenen/inowo,opened,CVE-2021-32640 (Medium) detected in ws-7.4.5.tgz,security vulnerability,"## CVE-2021-32640 - Medium Severity Vulnerability
Vulnerable Library - ws-7.4.5.tgz
Simple to use, blazing fast and thoroughly tested websocket client and server for Node.js
ws is an open source WebSocket client and server library for Node.js. A specially crafted value of the `Sec-Websocket-Protocol` header can be used to significantly slow down a ws server. The vulnerability has been fixed in ws@7.4.6 (https://github.com/websockets/ws/commit/00c425ec77993773d823f018f64a5c44e17023ff). In vulnerable versions of ws, the issue can be mitigated by reducing the maximum allowed length of the request headers using the [`--max-http-header-size=size`](https://nodejs.org/api/cli.html#cli_max_http_header_size_size) and/or the [`maxHeaderSize`](https://nodejs.org/api/http.html#http_http_createserver_options_requestlistener) options.
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2021-32640 (Medium) detected in ws-7.4.5.tgz - ## CVE-2021-32640 - Medium Severity Vulnerability
Vulnerable Library - ws-7.4.5.tgz
Simple to use, blazing fast and thoroughly tested websocket client and server for Node.js
ws is an open source WebSocket client and server library for Node.js. A specially crafted value of the `Sec-Websocket-Protocol` header can be used to significantly slow down a ws server. The vulnerability has been fixed in ws@7.4.6 (https://github.com/websockets/ws/commit/00c425ec77993773d823f018f64a5c44e17023ff). In vulnerable versions of ws, the issue can be mitigated by reducing the maximum allowed length of the request headers using the [`--max-http-header-size=size`](https://nodejs.org/api/cli.html#cli_max_http_header_size_size) and/or the [`maxHeaderSize`](https://nodejs.org/api/http.html#http_http_createserver_options_requestlistener) options.
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve medium detected in ws tgz cve medium severity vulnerability vulnerable library ws tgz simple to use blazing fast and thoroughly tested websocket client and server for node js library home page a href path to dependency file inowo package json path to vulnerable library inowo node modules ws package json dependency hierarchy laravel mix tgz root library webpack dev server beta tgz x ws tgz vulnerable library found in head commit a href found in base branch master vulnerability details ws is an open source websocket client and server library for node js a specially crafted value of the sec websocket protocol header can be used to significantly slow down a ws server the vulnerability has been fixed in ws in vulnerable versions of ws the issue can be mitigated by reducing the maximum allowed length of the request headers using the and or the options publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution ws step up your open source security game with whitesource ,0
648240,21179996761.0,IssuesEvent,2022-04-08 06:55:25,AY2122S2-CS2103T-T13-4/tp,https://api.github.com/repos/AY2122S2-CS2103T-T13-4/tp,closed,GUI Undo/Redo bug,type.Bug priority.MEDIUM,"To replicate, here are the steps:
1. Type `add`
2. Close `AddWindow`
3. Type `undo`
No commands should be undone, and if any action should be taken, it would be to reopen `AddWindow` (but I think this one would be complicated)
Alternatively, if you were to add a new `Person`..
1. Type `add`
2. Fill in details of new `Person`
3. Submit
4. Type undo. It gives the correct behaviour of undoing the adding of a `Person`
5. Type undo again. It gives the wrong behaviour and reports `Undo Success!` when the correct one should report `No more commands to undo!`, since typing `add` is just an intermediary to open up `AddWindow` rather than a command that modifies.
",1.0,"GUI Undo/Redo bug - To replicate, here are the steps:
1. Type `add`
2. Close `AddWindow`
3. Type `undo`
No commands should be undone, and if any action should be taken, it would be to reopen `AddWindow` (but I think this one would be complicated)
Alternatively, if you were to add a new `Person`..
1. Type `add`
2. Fill in details of new `Person`
3. Submit
4. Type undo. It gives the correct behaviour of undoing the adding of a `Person`
5. Type undo again. It gives the wrong behaviour and reports `Undo Success!` when the correct one should report `No more commands to undo!`, since typing `add` is just an intermediary to open up `AddWindow` rather than a command that modifies.
",0,gui undo redo bug to replicate here are the steps type add close addwindow type undo no commands should be undone and if any action should be taken it would be to reopen addwindow but i think this one would be complicated alternatively if you were to add a new person type add fill in details of new person submit type undo it gives the correct behaviour of undoing the adding of a person type undo again it gives the wrong behaviour and reports undo success when correct one should report no more commands to undo since typing add is just a intermediary to open up addwindow rather than a command that modifies ,0
1099,9461136796.0,IssuesEvent,2019-04-17 12:49:03,nf-core/tools,https://api.github.com/repos/nf-core/tools,opened,Automate Releases,automation question template,"Since we already insist on having people work on `dev` branches & `master` branches only incorporating stable code, we could also produce a way of making automated releases too:
https://github.com/semantic-release/semantic-release
This would only require:
- Setting the `master` branch protected for everyone, except for PRs coming from `dev`
- Forcing everyone to NEVER merge to `master` except for releases that have been tested on `dev` (which we anyways do)
Could then configure the method above to make a release whenever coming from `master` and something has been changed. If people then also use the `CHANGELOG.md`, it would automatically do proper and nice releases :-)
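A minimal sketch of how this could be wired up with semantic-release, assuming a `.releaserc.yaml` in the repository root (the plugin list and branch name are illustrative, not taken from nf-core's setup, and the exact keys vary by semantic-release version):
```yaml
# .releaserc.yaml — hypothetical semantic-release configuration
branches:
  - master                                      # only cut releases from the protected branch
plugins:
  - "@semantic-release/commit-analyzer"         # derive the version bump from commit messages
  - "@semantic-release/release-notes-generator"
  - "@semantic-release/changelog"               # keep CHANGELOG.md in sync automatically
  - "@semantic-release/github"                  # publish the GitHub release
```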
",1.0,"Automate Releases - Since we already insist on having people work on `dev` branches & `master` branches only incorporating stable code, we could also produce a way of making automated releases too:
https://github.com/semantic-release/semantic-release
This would only require:
- Setting the `master` branch protected for everyone, except for PRs coming from `dev`
- Forcing everyone to NEVER merge to `master` except for releases that have been tested on `dev` (which we anyways do)
Could then configure the method above to make a release whenever coming from `master` and something has been changed. If people then also use the `CHANGELOG.md`, it would automatically do proper and nice releases :-)
",1,automate releases since we already insist on having people work on dev branches master branches only incorporating stable code we could also produce a way of making automated releases too this would only require setting the master branch protected for everyone except for prs coming from dev forcing everyone to never merge to master except for releases that have been tested on dev which we anyways do could then configure the method above to make a release whenever coming from master and something has been changed if people then also use the changelog md it would automatically do proper and nice releases ,1
1166,9607666582.0,IssuesEvent,2019-05-11 21:03:54,riemers/home-assistant-config,https://api.github.com/repos/riemers/home-assistant-config,opened,Make a movie mode,Automation Todo,"- Dim lights (if at night time)
- Only turn on movie mode is receiver is turned on
- Turn on subwoofer (add a plug in between, saves usage)
- Turn sun screen down (day and night, awful pedestrian stoplight blurring my vision)
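A rough Home Assistant automation sketch along these lines — all entity ids below are placeholders, not the author's real entities:
```yaml
# A rough sketch only; entity ids are hypothetical
- alias: Movie mode
  trigger:
    - platform: state
      entity_id: media_player.receiver
      to: 'on'
  condition:
    - condition: sun
      after: sunset          # only engage movie mode at night
  action:
    - service: light.turn_on
      target:
        entity_id: light.living_room
      data:
        brightness_pct: 20   # dim rather than switch off
    - service: switch.turn_on
      target:
        entity_id: switch.subwoofer_plug
    - service: cover.close_cover
      target:
        entity_id: cover.sun_screen
```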
",1.0,"Make a movie mode - - Dim lights (if at night time)
- Only turn on movie mode is receiver is turned on
- Turn on subwoofer (add a plug in between, saves usage)
- Turn sun screen down (day and night, awful pedestrian stoplight blurring my vision)
",1,make a movie mode dim lights if at night time only turn on movie mode is receiver is turned on turn on subwoofer add a plug in between safes usage turn sun screen down day and night awefull pedestrian stoplight bluring my vision ,1
107080,16751510521.0,IssuesEvent,2021-06-12 01:03:36,Tim-sandbox/EZBuggyPrioritize,https://api.github.com/repos/Tim-sandbox/EZBuggyPrioritize,opened,CVE-2018-3258 (High) detected in mysql-connector-java-5.1.25.jar,security vulnerability,"## CVE-2018-3258 - High Severity Vulnerability
Vulnerable Library - mysql-connector-java-5.1.25.jar
Path to dependency file: EZBuggyPrioritize/pom.xml
Path to vulnerable library: EZBuggyPrioritize/target/easybuggy-1-SNAPSHOT/WEB-INF/lib/mysql-connector-java-5.1.25.jar,canner/.m2/repository/mysql/mysql-connector-java/5.1.25/mysql-connector-java-5.1.25.jar
Vulnerability in the MySQL Connectors component of Oracle MySQL (subcomponent: Connector/J). Supported versions that are affected are 8.0.12 and prior. Easily exploitable vulnerability allows low privileged attacker with network access via multiple protocols to compromise MySQL Connectors. Successful attacks of this vulnerability can result in takeover of MySQL Connectors. CVSS 3.0 Base Score 8.8 (Confidentiality, Integrity and Availability impacts). CVSS Vector: (CVSS:3.0/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H).
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
",True,"CVE-2018-3258 (High) detected in mysql-connector-java-5.1.25.jar - ## CVE-2018-3258 - High Severity Vulnerability
Vulnerable Library - mysql-connector-java-5.1.25.jar
Path to dependency file: EZBuggyPrioritize/pom.xml
Path to vulnerable library: EZBuggyPrioritize/target/easybuggy-1-SNAPSHOT/WEB-INF/lib/mysql-connector-java-5.1.25.jar,canner/.m2/repository/mysql/mysql-connector-java/5.1.25/mysql-connector-java-5.1.25.jar
Vulnerability in the MySQL Connectors component of Oracle MySQL (subcomponent: Connector/J). Supported versions that are affected are 8.0.12 and prior. Easily exploitable vulnerability allows low privileged attacker with network access via multiple protocols to compromise MySQL Connectors. Successful attacks of this vulnerability can result in takeover of MySQL Connectors. CVSS 3.0 Base Score 8.8 (Confidentiality, Integrity and Availability impacts). CVSS Vector: (CVSS:3.0/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H).
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
",0,cve high detected in mysql connector java jar cve high severity vulnerability vulnerable library mysql connector java jar mysql jdbc type driver library home page a href path to dependency file ezbuggyprioritize pom xml path to vulnerable library ezbuggyprioritize target easybuggy snapshot web inf lib mysql connector java jar canner repository mysql mysql connector java mysql connector java jar dependency hierarchy x mysql connector java jar vulnerable library found in base branch main vulnerability details vulnerability in the mysql connectors component of oracle mysql subcomponent connector j supported versions that are affected are and prior easily exploitable vulnerability allows low privileged attacker with network access via multiple protocols to compromise mysql connectors successful attacks of this vulnerability can result in takeover of mysql connectors cvss base score confidentiality integrity and availability impacts cvss vector cvss av n ac l pr l ui n s u c h i h a h publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution mysql mysql connector java rescue worker helmet automatic remediation is available for this issue isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree mysql mysql connector java isminimumfixversionavailable true minimumfixversion mysql mysql connector java basebranches vulnerabilityidentifier cve vulnerabilitydetails vulnerability in the mysql connectors component of oracle mysql subcomponent connector j supported versions that are affected are and prior easily exploitable vulnerability allows low privileged attacker with network access via multiple protocols to compromise mysql connectors successful attacks of this vulnerability can result in takeover of mysql connectors cvss base score confidentiality integrity and availability impacts cvss vector cvss av n ac l pr l ui n s u c h i h a h vulnerabilityurl ,0
4683,17200692041.0,IssuesEvent,2021-07-17 06:44:10,home-assistant/core,https://api.github.com/repos/home-assistant/core,closed,Time Condition problem with both After and Before fields,integration: automation stale,"### The problem
I have an automation to switch on/off a plug, which is triggered from two time-only input_datetime helpers (one for each of on/off) and a device-tracker presence boolean grouped into group.inhabitants. Then there are two choose actions, one for on one for off, determined by various conditions.
The triggers
```
trigger:
- platform: state
entity_id: group.inhabitants
- platform: time
at: input_datetime.plug_coffee_on
- platform: time
at: input_datetime.plug_coffee_off
```
The condition for on:
```
conditions:
- condition: and
conditions:
- condition: time
after: input_datetime.plug_coffee_on
before: input_datetime.plug_coffee_off
- condition: state
entity_id: group.inhabitants
state: 'on'
```
The problem is that the on choose branch is executed when the OFF time helper is triggered, despite its being after the OFF time. It seems that the time condition in this case ignores the before statement.
You can see this to be the case in the UI automation editor. There, the before condition is in fact empty. When I try to fix it up, by entering the proper helper in the UI and saving it, it is again empty when I reopen the automation editor (see the screenshot). Nothing changes in the YAML.

I've pasted the whole automation below, in case it is something strange in some other part of it that is causing this behaviour.
### What version of Home Assistant Core has the issue?
2021.4.3
### What was the last working version of Home Assistant Core?
?
### What type of installation are you running?
Home Assistant OS
### Integration causing the issue
Automation
### Link to integration documentation on our website
_No response_
### Example YAML snippet
```yaml
alias: 'Plug: coffee machine'
description: Switches on coffee machine if during day and someone home
trigger:
- platform: state
entity_id: group.inhabitants
- platform: time
at: input_datetime.plug_coffee_on
- platform: time
at: input_datetime.plug_coffee_off
action:
- choose:
- alias: Turn it on
conditions:
- condition: and
conditions:
- condition: time
after: input_datetime.plug_coffee_on
before: input_datetime.plug_coffee_off
- condition: state
entity_id: group.inhabitants
state: 'on'
sequence:
- service: switch.turn_on
target:
entity_id: switch.plug_salon_coffee
- service: notify.persistent_notification
data:
title: Coffee switched on
message: Indeed
- choose:
- alias: Turn it off
conditions:
- condition: or
conditions:
- condition: not
conditions:
- condition: time
after: input_datetime.plug_coffee_on
before: input_datetime.plug_coffee_off
- condition: state
entity_id: group.inhabitants
state: 'off'
sequence:
- service: switch.turn_off
target:
entity_id: switch.plug_salon_coffee
- service: notify.persistent_notification
data:
title: Coffee switched off
message: Indeed
mode: single
```
### Anything in the logs that might be useful for us?
Attaching a copy of the automation's trace. Renamed to .txt so I can upload:
[trace automation.plug_coffee_machine 2021-04-12T14_00_00.003971+00_00.txt](https://github.com/home-assistant/core/files/6297413/trace.automation.plug_coffee_machine.2021-04-12T14_00_00.003971%2B00_00.txt)
",1.0,"Time Condition problem with both After and Before fields - ### The problem
I have an automation to switch on/off a plug, which is triggered from two time-only input_datetime helpers (one for each of on/off) and a device-tracker presence boolean grouped into group.inhabitants. Then there are two choose actions, one for on one for off, determined by various conditions.
The triggers
```
trigger:
- platform: state
entity_id: group.inhabitants
- platform: time
at: input_datetime.plug_coffee_on
- platform: time
at: input_datetime.plug_coffee_off
```
The condition for on:
```
conditions:
- condition: and
conditions:
- condition: time
after: input_datetime.plug_coffee_on
before: input_datetime.plug_coffee_off
- condition: state
entity_id: group.inhabitants
state: 'on'
```
The problem is that the on choose branch is executed when the OFF time helper is triggered, despite its being after the OFF time. It seems that the time condition in this case ignores the before statement.
You can see this to be the case in the UI automation editor. There, the before condition is in fact empty. When I try to fix it up, by entering the proper helper in the UI and saving it, it is again empty when I reopen the automation editor (see the screenshot). Nothing changes in the YAML.

I've pasted the whole automation below, in case it is something strange in some other part of it that is causing this behaviour.
### What version of Home Assistant Core has the issue?
2021.4.3
### What was the last working version of Home Assistant Core?
?
### What type of installation are you running?
Home Assistant OS
### Integration causing the issue
Automation
### Link to integration documentation on our website
_No response_
### Example YAML snippet
```yaml
alias: 'Plug: coffee machine'
description: Switches on coffee machine if during day and someone home
trigger:
- platform: state
entity_id: group.inhabitants
- platform: time
at: input_datetime.plug_coffee_on
- platform: time
at: input_datetime.plug_coffee_off
action:
- choose:
- alias: Turn it on
conditions:
- condition: and
conditions:
- condition: time
after: input_datetime.plug_coffee_on
before: input_datetime.plug_coffee_off
- condition: state
entity_id: group.inhabitants
state: 'on'
sequence:
- service: switch.turn_on
target:
entity_id: switch.plug_salon_coffee
- service: notify.persistent_notification
data:
title: Coffee switched on
message: Indeed
- choose:
- alias: Turn it off
conditions:
- condition: or
conditions:
- condition: not
conditions:
- condition: time
after: input_datetime.plug_coffee_on
before: input_datetime.plug_coffee_off
- condition: state
entity_id: group.inhabitants
state: 'off'
sequence:
- service: switch.turn_off
target:
entity_id: switch.plug_salon_coffee
- service: notify.persistent_notification
data:
title: Coffee switched off
message: Indeed
mode: single
```
### Anything in the logs that might be useful for us?
Attaching a copy of the automation's trace. Renamed to .txt so I can upload:
[trace automation.plug_coffee_machine 2021-04-12T14_00_00.003971+00_00.txt](https://github.com/home-assistant/core/files/6297413/trace.automation.plug_coffee_machine.2021-04-12T14_00_00.003971%2B00_00.txt)
",1,time condition problem with both after and before fields the problem i have an automation to switch on off a plug which is triggered from two time only input datetime helpers one for each of on off and a device tracker presence boolean grouped into group inhabitants then there are two choose actions one for on one for off determined by various conditions the triggers trigger platform state entity id group inhabitants platform time at input datetime plug coffee on platform time at input datetime plug coffee off the condition for on conditions condition and conditions condition time after input datetime plug coffee on before input datetime plug coffee off condition state entity id group inhabitants state on the problem is that the on choose branch is executed when the off time helper is triggered despite its being after the off time it seems that the time condition in this case ignores the before statement you can see this to be the case in the ui automation editor there the before condition is in fact empty when i try to fix it up by entering the proper helper in the ui and saving it it is again empty when i reopen the automation editor see the screenshot nothing changes in the yaml i ve pasted the whole automation below in case it is something strange in some other part of it that is causing this behaviour what is version of home assistant core has the issue what was the last working version of home assistant core what type of installation are you running home assistant os integration causing the issue automation link to integration documentation on our website no response example yaml snippet yaml alias plug coffee machine description switches on coffee machine if during day and someone home trigger platform state entity id group inhabitants platform time at input datetime plug coffee on platform time at input datetime plug coffee off action choose alias turn it on conditions condition and conditions condition time after input datetime plug coffee on before input datetime plug coffee off condition state entity id group inhabitants state on sequence service switch turn on target entity id switch plug salon coffee service notify persistent notification data title coffee switched on message indeed choose alias turn it off conditions condition or conditions condition not conditions condition time after input datetime plug coffee on before input datetime plug coffee off condition state entity id group inhabitants state off sequence service switch turn off target entity id switch plug salon coffee service notify persistent notification data title coffee switched off message indeed mode single anything in the logs that might be useful for us attaching a copy of the automation s trace renamed to txt so i can upload ,1
671273,22752900380.0,IssuesEvent,2022-07-07 14:23:24,opensquare-network/paid-qa,https://api.github.com/repos/opensquare-network/paid-qa,closed,"page `/#/new` error, balance",bug priority:medium,"reproduce, reload page `/#/new`, related file `site/utils/hooks.js`
",1.0,"page `/#/new` error, balance - reproduce, reload page `/#/new`, related file `site/utils/hooks.js`
",0,page new error balance reproduce reload page new related file site utils hooks js img width alt image src ,0
1529,10291354253.0,IssuesEvent,2019-08-27 14:19:00,mozilla-mobile/android-components,https://api.github.com/repos/mozilla-mobile/android-components,opened,Auto-Land MickeyMoz PRs,🤖 automation,"Initially MickeyMoz only created PRs so that we can verify them and see how it works out. Nowadays a bunch of them are pretty stable and we could consider auto-merging them (Docs, Public Suffix List, GeckoView updates).
With bors-ng now (#1200) I think we could do something like:
* MickeyMoz opens a PR
* MozLando approves PR (would need to be a code owner unless we lift that restriction since bors does not respect codeowners yet anyways)
* MozLando comments with ""bors r+""
* bors-ng tests and lands the patch",1.0,"Auto-Land MickeyMoz PRs - Initially MickeyMoz only created PRs so that we can verify them and see how it works out. Nowadays a bunch of them are pretty stable and we could consider auto-merging them (Docs, Public Suffix List, GeckoView updates).
With bors-ng now (#1200) I think we could do something like:
* MickeyMoz opens a PR
* MozLando approves PR (would need to be a code owner unless we lift that restriction since bors does not respect codeowners yet anyways)
* MozLando comments with ""bors r+""
* bors-ng tests and lands the patch",1,auto land mickeymoz prs initially mickeymoz only created prs so that we can verify them and see how it works out nowadays a bunch of them are pretty stable and we could consider auto merging them docs public suffix list geckoview updates with bors ng now i think we could do something like mickeymoz openes pr mozlando approves pr would need to be a code owner unless we lift that restriction since bors does not respect codeowners yet anyways mozlando comments with bors r bors ng tests and lands the patch,1
33009,12157618593.0,IssuesEvent,2020-04-25 23:00:11,jmservera/node-red-azure-webapp,https://api.github.com/repos/jmservera/node-red-azure-webapp,opened,"WS-2016-0044 (Medium) detected in swagger-ui-v2.1.4, swagger-ui-2.1.4.tgz",security vulnerability,"## WS-2016-0044 - Medium Severity Vulnerability
Vulnerable Libraries - swagger-ui-2.1.4.tgz
swagger-ui-2.1.4.tgz
Swagger UI is a dependency-free collection of HTML, JavaScript, and CSS assets that dynamically generate beautiful documentation from a Swagger-compliant API
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"WS-2016-0044 (Medium) detected in swagger-ui-v2.1.4, swagger-ui-2.1.4.tgz - ## WS-2016-0044 - Medium Severity Vulnerability
Vulnerable Libraries - swagger-ui-2.1.4.tgz
swagger-ui-2.1.4.tgz
Swagger UI is a dependency-free collection of HTML, JavaScript, and CSS assets that dynamically generate beautiful documentation from a Swagger-compliant API
/cc @cockroachdb/kv-triage
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*kv0/enc=false/nodes=1/size=64kb/conc=4096.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
Jira issue: CRDB-13816
Epic CRDB-16238",0,roachtest enc false nodes size conc failed roachtest enc false nodes size conc with on release write write write write write write write write write write write wraps command problem wraps node command with error workload run kv init histograms perf stats json concurrency splits duration read percent min block bytes max block bytes pgurl wraps exit status error types withstack withstack errutil withprefix cluster withcommanddetails errors cmd hintdetail withdetail exec exiterror monitor go kv go kv go test runner go monitor failure monitor task failed t fatal was called attached stack trace stack trace main monitorimpl waite main pkg cmd roachtest monitor go main monitorimpl wait main pkg cmd roachtest monitor go github com cockroachdb cockroach pkg cmd roachtest tests registerkv github com cockroachdb cockroach pkg cmd roachtest tests kv go github com cockroachdb cockroach pkg cmd roachtest tests registerkv github com cockroachdb cockroach pkg cmd roachtest tests kv go main testrunner runtest main pkg cmd roachtest test runner go wraps monitor failure wraps attached stack trace stack trace main monitorimpl wait main pkg cmd roachtest monitor go wraps monitor task failed wraps attached stack trace stack trace main init main pkg cmd roachtest monitor go runtime doinit goroot src runtime proc go runtime main goroot src runtime proc go runtime goexit goroot src runtime asm s wraps t fatal was called error types withstack withstack errutil withprefix withstack withstack errutil withprefix withstack withstack errutil leaferror help see see same failure on other branches roachtest enc false nodes size conc failed roachtest enc false nodes size conc failed cc cockroachdb kv triage jira issue crdb epic crdb ,0
1421,10091903305.0,IssuesEvent,2019-07-26 15:18:16,MicrosoftDocs/azure-docs,https://api.github.com/repos/MicrosoftDocs/azure-docs,closed,"Typo for ""set stop healthservice""",Pri2 automation/svc change-inventory-management/subsvc cxp doc-enhancement triaged,"It should be ""net stop healthservice"".
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 44e919d1-bf8c-38af-29a5-63a2e679850d
* Version Independent ID: 03a4bc5a-6666-bc24-f315-4324278e50ae
* Content: [Troubleshooting issues with Azure Change Tracking](https://docs.microsoft.com/en-us/azure/automation/troubleshoot/change-tracking#feedback)
* Content Source: [articles/automation/troubleshoot/change-tracking.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/troubleshoot/change-tracking.md)
* Service: **automation**
* Sub-service: **change-inventory-management**
* GitHub Login: @bobbytreed
* Microsoft Alias: **robreed**",1.0,"Typo for ""set stop healthservice"" - It should be ""net stop healthservice"".
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 44e919d1-bf8c-38af-29a5-63a2e679850d
* Version Independent ID: 03a4bc5a-6666-bc24-f315-4324278e50ae
* Content: [Troubleshooting issues with Azure Change Tracking](https://docs.microsoft.com/en-us/azure/automation/troubleshoot/change-tracking#feedback)
* Content Source: [articles/automation/troubleshoot/change-tracking.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/troubleshoot/change-tracking.md)
* Service: **automation**
* Sub-service: **change-inventory-management**
* GitHub Login: @bobbytreed
* Microsoft Alias: **robreed**",1,typo for set stop healthservice it should be net stop healthservice document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service automation sub service change inventory management github login bobbytreed microsoft alias robreed ,1
3047,13034356824.0,IssuesEvent,2020-07-28 08:34:30,DimensionDev/Maskbook,https://api.github.com/repos/DimensionDev/Maskbook,closed,[Bug] CI tag conflict when fetching new commits,Dev: CI Automation Type: Bug,"# Bug Report
## Environment
### System
- OS:
- OS Version:
### Browser
- Browser:
- Browser Version:
### Maskbook
- Maskbook Version:
- Installation: /* May be ""Store"", ""ZIP"", or ""Self-Complied"" */
- Build Commit: Optionally attach a Commit ID, if it is from an pre-release branch head
## Bug Info
### Expected Behavior
/* Write the expected behavior here. */
### Actual Behavior
https://dimension.chat/group/maskbook-qa?msg=KQd9E5MokZXrDAtZM
### How To Reproduce
/* Specify how it may be produced here. */
",1.0,"[Bug] CI tag conflict when fetching new commits - # Bug Report
## Environment
### System
- OS:
- OS Version:
### Browser
- Browser:
- Browser Version:
### Maskbook
- Maskbook Version:
- Installation: /* May be ""Store"", ""ZIP"", or ""Self-Complied"" */
- Build Commit: Optionally attach a Commit ID, if it is from an pre-release branch head
## Bug Info
### Expected Behavior
/* Write the expected behavior here. */
### Actual Behavior
https://dimension.chat/group/maskbook-qa?msg=KQd9E5MokZXrDAtZM
### How To Reproduce
/* Specify how it may be produced here. */
",1, ci tag conflict when fetching new commits bug report environment system os os version browser browser browser version maskbook maskbook version installation may be store zip or self complied build commit optionally attach a commit id if it is from an pre release branch head bug info expected behavior write the expected behavior here actual behavior how to reproduce specify how it may be produced here ,1
4419,16508307774.0,IssuesEvent,2021-05-25 22:36:50,longhorn/longhorn,https://api.github.com/repos/longhorn/longhorn,closed,[BUG] Priority class not getting set for recurring jobs,area/manager backport-needed kind/bug require/automation-e2e severity/4,"**Describe the bug**
The priority class set using the Longhorn UI is not reflected for the recurring job.
**To Reproduce**
Steps to reproduce the behavior:
1. Deploy Longhorn master on a cluster.
2. Set priority class in the setting page of Longhorn UI.
3. Create a volume and configure recurring snapshot/backup job for it.
4. Check the yaml for recurring job, there is no priority class.
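For illustration, roughly what one would expect the recurring-job spec to contain if the setting were honoured — the resource name, image and priority class value below are hypothetical, not taken from the report:
```yaml
# Hypothetical recurring-job CronJob fragment (batch/v1beta1 matches K8s v1.20)
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: volume-snapshot-recurring-job
spec:
  schedule: "0 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          priorityClassName: longhorn-priority   # expected to match the Longhorn setting
          restartPolicy: OnFailure
          containers:
            - name: recurring-job
              image: longhornio/longhorn-manager:master
```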
**Environment:**
- Longhorn version: Longhorn -master `04/06/2021`
- Installation method (e.g. Rancher Catalog App/Helm/Kubectl): Kubectl
- Kubernetes distro (e.g. RKE/K3s/EKS/OpenShift) and version: RKE - K8s v1.20.4
- Number of management node in the cluster: 1
- Number of worker node in the cluster: 4
",1.0,"[BUG] Priority class not getting set for recurring jobs - **Describe the bug**
The priority class set using the Longhorn UI is not reflected for the recurring job.
**To Reproduce**
Steps to reproduce the behavior:
1. Deploy Longhorn master on a cluster.
2. Set priority class in the setting page of Longhorn UI.
3. Create a volume and configure recurring snapshot/backup job for it.
4. Check the yaml for recurring job, there is no priority class.
**Environment:**
- Longhorn version: Longhorn -master `04/06/2021`
- Installation method (e.g. Rancher Catalog App/Helm/Kubectl): Kubectl
- Kubernetes distro (e.g. RKE/K3s/EKS/OpenShift) and version: RKE - K8s v1.20.4
- Number of management node in the cluster: 1
- Number of worker node in the cluster: 4
",1, priority class not getting set for recurring jobs describe the bug priority set using longhorn ui is not reflecting for recurring job to reproduce steps to reproduce the behavior deploy longhorn master on a cluster set priority class in the setting page of longhorn ui create a volume and configure recurring snapshot backup job for it check the yaml for recurring job there is no priority class environment longhorn version longhorn master installation method e g rancher catalog app helm kubectl kubectl kubernetes distro e g rke eks openshift and version rke number of management node in the cluster number of worker node in the cluster ,1
470348,13536123247.0,IssuesEvent,2020-09-16 08:35:48,DigitalExcellence/dex-backend,https://api.github.com/repos/DigitalExcellence/dex-backend,closed,System.Net.Mail.SmtpException: The SMTP server requires a secure connection or the client was not authenticated. The server resp...,priority,"Sentry Issue: [IDENTITYSERVER-E](https://sentry.io/organizations/digital-excellence/issues/1870503441/?referrer=github_integration)
```
System.Net.Mail.SmtpException: The SMTP server requires a secure connection or the client was not authenticated. The server response was: 5.7.0 Authentication Required. Learn more at
File ""/app/IdentityServer/Quickstart/Account/ExternalController.cs"", line 245, in Callback
Module ""IdentityServer4.Hosting.IdentityServerMiddleware"", in Invoke
Module ""IdentityServer4.Hosting.MutualTlsTokenEndpointMiddleware"", in Invoke
Module ""IdentityServer4.Hosting.BaseUrlMiddleware"", in Invoke
...
(44 additional frame(s) were not displayed)
```",1.0,"System.Net.Mail.SmtpException: The SMTP server requires a secure connection or the client was not authenticated. The server resp... - Sentry Issue: [IDENTITYSERVER-E](https://sentry.io/organizations/digital-excellence/issues/1870503441/?referrer=github_integration)
```
System.Net.Mail.SmtpException: The SMTP server requires a secure connection or the client was not authenticated. The server response was: 5.7.0 Authentication Required. Learn more at
File ""/app/IdentityServer/Quickstart/Account/ExternalController.cs"", line 245, in Callback
Module ""IdentityServer4.Hosting.IdentityServerMiddleware"", in Invoke
Module ""IdentityServer4.Hosting.MutualTlsTokenEndpointMiddleware"", in Invoke
Module ""IdentityServer4.Hosting.BaseUrlMiddleware"", in Invoke
...
(44 additional frame(s) were not displayed)
```",0,system net mail smtpexception the smtp server requires a secure connection or the client was not authenticated the server resp sentry issue system net mail smtpexception the smtp server requires a secure connection or the client was not authenticated the server response was authentication required learn more at file app identityserver quickstart account externalcontroller cs line in callback module hosting identityservermiddleware in invoke module hosting mutualtlstokenendpointmiddleware in invoke module hosting baseurlmiddleware in invoke additional frame s were not displayed ,0
4965,18110085331.0,IssuesEvent,2021-09-23 01:50:36,theglus/Home-Assistant-Config,https://api.github.com/repos/theglus/Home-Assistant-Config,closed,Migrate Vacuum notifications from Telegram to In-App,automation Winston notifications,"# Requirements
- [x] Create [notify group](https://www.home-assistant.io/integrations/notify.group/).
- [x] Update [vacuum_clean.yaml](https://github.com/theglus/Home-Assistant-Config/blob/d257609aabfa13a63e46892deac6add52ce963c8/includes/automations/vacuum_clean.yaml).
- [x] Update [vacuum_docked.yaml](https://github.com/theglus/Home-Assistant-Config/blob/d257609aabfa13a63e46892deac6add52ce963c8/includes/automations/vacuum_docked.yaml).
- [x] Update [vacuum_done.yaml](https://github.com/theglus/Home-Assistant-Config/blob/d257609aabfa13a63e46892deac6add52ce963c8/includes/automations/vacuum_done.yaml).
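A minimal sketch of the notify group from the first requirement, assuming two hypothetical mobile-app notification targets; the vacuum automations can then call the group service instead of Telegram:
```yaml
# configuration.yaml — hypothetical notify group for in-app notifications
notify:
  - platform: group
    name: household            # exposed as notify.household
    services:
      - service: mobile_app_phone_a
      - service: mobile_app_phone_b
```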
# Resources
* [Notification Integration](https://www.home-assistant.io/integrations/notify/)
* [Notify Group](https://www.home-assistant.io/integrations/notify.group/)",1.0,"Migrate Vacuum notifications from Telegram to In-App - # Requirements
- [x] Create [notify group](https://www.home-assistant.io/integrations/notify.group/).
- [x] Update [vacuum_clean.yaml](https://github.com/theglus/Home-Assistant-Config/blob/d257609aabfa13a63e46892deac6add52ce963c8/includes/automations/vacuum_clean.yaml).
- [x] Update [vacuum_docked.yaml](https://github.com/theglus/Home-Assistant-Config/blob/d257609aabfa13a63e46892deac6add52ce963c8/includes/automations/vacuum_docked.yaml).
- [x] Update [vacuum_done.yaml](https://github.com/theglus/Home-Assistant-Config/blob/d257609aabfa13a63e46892deac6add52ce963c8/includes/automations/vacuum_done.yaml).
# Resources
* [Notification Integration](https://www.home-assistant.io/integrations/notify/)
* [Notify Group](https://www.home-assistant.io/integrations/notify.group/)",1,migrate vacuum notifications from telegram to in app requirements create update update update resources ,1
743,7976150936.0,IssuesEvent,2018-07-17 11:44:19,lubbertkramer/home_assistant_config,https://api.github.com/repos/lubbertkramer/home_assistant_config,opened,Auto dimming lights that are on,Automation Home Assistant Not now,"Maybe auto dim existing lights that are on throughout the day.
https://community.home-assistant.io/t/group-add-entities-interesting-concept-how-does-it-work/54967/6?u=ccostan",1.0,"Auto dimming lights that are on - Maybe auto dim existing lights that are on throughout the day.
https://community.home-assistant.io/t/group-add-entities-interesting-concept-how-does-it-work/54967/6?u=ccostan",1,auto dimming lights that are on maybe auto dim existing lights that are on throughout the day ,1
9151,27628793415.0,IssuesEvent,2023-03-10 09:15:28,camunda/camunda-bpm-platform,https://api.github.com/repos/camunda/camunda-bpm-platform,closed,"Fire historic task update events when task property ""last updated"" is changed",version:7.19.0 type:feature component:c7-automation-platform group:support,"### User Story (Required on creation)
As a developer, I want to keep my custom Tasklist system in sync, which relies on a custom history backend. For that, I need historic task update events fired whenever the ""last updated"" task property is changed.
### Functional Requirements (Required before implementation)
Fire the historic task update events in case task property changed, including ""last updated"".
### Technical Requirements (Required before implementation)
* fire the event for ""last update"" and consider future properties
* evaluate if ""last update"" brings value in the history for the user. if It doesn't, we only emit the event, ensure we don't populate information in the history for it
* preserve the events order
* look for other implications if we bring back the #update in #triggerUpdateEvent
### Limitations of Scope
### Hints
### Links
https://jira.camunda.com/browse/SUPPORT-15192
### Breakdown
- [x] https://github.com/camunda/camunda-bpm-platform/pull/3178
",1.0,"Fire historic task update events when task property ""last updated"" is changed - ### User Story (Required on creation)
As a developer, I want to keep my custom Tasklist system in sync, which relies on a custom history backend. For that, I need historic task update events fired whenever the ""last updated"" task property is changed.
### Functional Requirements (Required before implementation)
Fire the historic task update events in case task property changed, including ""last updated"".
### Technical Requirements (Required before implementation)
* fire the event for ""last update"" and consider future properties
* evaluate if ""last update"" brings value in the history for the user. if It doesn't, we only emit the event, ensure we don't populate information in the history for it
* preserve the events order
* look for other implications if we bring back the #update in #triggerUpdateEvent
### Limitations of Scope
### Hints
### Links
https://jira.camunda.com/browse/SUPPORT-15192
### Breakdown
- [x] https://github.com/camunda/camunda-bpm-platform/pull/3178
",1,fire historic task update events when task property last updated is changed user story required on creation as a developer i want to keep my custom tasklist system in sync which relies on a custom history backend for that i need historic task update events fired whenever the last updated task property is changed functional requirements required before implementation fire the historic task update events in case task property changed including last updated technical requirements required before implementation fire the event for last update and consider future properties evaluate if last update brings value in the history for the user if it doesn t we only emit the event ensure we don t populate information in the history for it preserve the events order look for other implications if we bring back the update in triggerupdateevent limitations of scope hints links breakdown ,1
78891,22496235458.0,IssuesEvent,2022-06-23 07:50:32,OpenModelica/OpenModelica,https://api.github.com/repos/OpenModelica/OpenModelica,closed,Windows installers fail SmartScreen checks,enhancement COMP/Build System,"When installing OMC on Windows, the SmartScreen filter identifies the OMC installer as suspicious software from unidentified authors, and requires to give explicit consent to perform a potentially dangerous installation.
This may be ok for hardened hackers who know about the OSMC, but it's not projecting an image of quality and dependability on the software, particularly for industrial and corporate use. Looking like potential malware is not a very good marketing strategy :)
I would recommend that from 1.13.0 we start signing the installer with a certificate, so that we avoid this kind of problems. More information on how to do this is found [here](https://blogs.msdn.microsoft.com/ie/2011/03/22/smartscreen-application-reputation-building-reputation/).
----------
From https://trac.openmodelica.org/OpenModelica/ticket/4829",1.0,"Windows installers fail SmartScreen checks - When installing OMC on Windows, the SmartScreen filter identifies the OMC installer as suspicious software from unidentified authors, and requires to give explicit consent to perform a potentially dangerous installation.
This may be ok for hardened hackers who know about the OSMC, but it's not projecting an image of quality and dependability on the software, particularly for industrial and corporate use. Looking like potential malware is not a very good marketing strategy :)
I would recommend that from 1.13.0 we start signing the installer with a certificate, so that we avoid this kind of problems. More information on how to do this is found [here](https://blogs.msdn.microsoft.com/ie/2011/03/22/smartscreen-application-reputation-building-reputation/).
----------
From https://trac.openmodelica.org/OpenModelica/ticket/4829",0,windows installers fail smartscreen checks when installing omc on windows the smartscreen filter identifies the omc installer as suspicious software from unidentified authors and requires to give explicit consent to perform a potentially dangerous installation this may be ok for hardened hackers that know about the osmc but it s not projecting an image of quality and dependability on the sofware particularly for industrial and corporate use looking like potential malware is not a very good marketing strategy i would recommend that from we start signing the installer with a certificate so that we avoid this kind of problems more information on how to do this is found from ,0
4033,15216571968.0,IssuesEvent,2021-02-17 15:39:15,uiowa/uiowa,https://api.github.com/repos/uiowa/uiowa,reopened,Path alias %files does not work with Drush rsync,automation bug,"```
vagrant@local:/var/www/uiowa$ drush rsync @home.test:%files @home.local:%files
In BackendPathEvaluator.php line 85:
Cannot evaluate path alias %files for site alias @home.test
```",1.0,"Path alias %files does not work with Drush rsync - ```
vagrant@local:/var/www/uiowa$ drush rsync @home.test:%files @home.local:%files
In BackendPathEvaluator.php line 85:
Cannot evaluate path alias %files for site alias @home.test
```",1,path alias files does not work with drush rsync vagrant local var www uiowa drush rsync home test files home local files in backendpathevaluator php line cannot evaluate path alias files for site alias home test ,1
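For context on the record above, the two aliases are typically defined in a site alias file like the hypothetical sketch below (hosts, users and roots are made up); `%files` is resolved by querying each site at runtime, which is the step that fails here, so spelling out the files directory explicitly is one possible workaround:
```yaml
# home.site.yml — hypothetical alias definitions for @home.test and @home.local
test:
  host: test.example.edu
  user: deploy
  root: /var/www/html/web
  uri: https://home.test.example.edu
local:
  root: /var/www/uiowa/web
  uri: http://home.local
# Possible workaround while %files cannot be evaluated: rsync an explicit path, e.g.
#   drush rsync @home.test:sites/default/files/ @home.local:sites/default/files/
```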
1798,10789362855.0,IssuesEvent,2019-11-05 11:45:08,DevExpress/testcafe,https://api.github.com/repos/DevExpress/testcafe,opened,Download file native dialog in Internet Explorer 11 (IE 11),FREQUENCY: level 1 SYSTEM: automations TYPE: bug support center,"### Steps to Reproduce:
**index.html**
```html
Download
```
**test.js**:
```js
import { Selector } from 'testcafe';
// import fs from 'fs';
fixture `Fixture`
.page `http://localhost:8080/index.html`;
test('test', async t => {
await t
.click(Selector('body > a:nth-child(1)'));
await t.wait(4000);
// const filePath = 'c:\\Users\\name\\Downloads\\file.xlsx';
// await t
// .expect(fs.existsSync(filePath)).ok();
});
```
1. Run `testcafe ie test.js`.
2. See the following result:

### Your Environment details:
* testcafe version: v1.6.0
* browser name and version: IE 11
* platform and version: Windows 10
",1.0,"Download file native dialog in Internet Explorer 11 (IE 11) - ### Steps to Reproduce:
**index.html**
```html
Download
```
**test.js**:
```js
import { Selector } from 'testcafe';
// import fs from 'fs';
fixture `Fixture`
.page `http://localhost:8080/index.html`;
test('test', async t => {
await t
.click(Selector('body > a:nth-child(1)'));
await t.wait(4000);
// const filePath = 'c:\\Users\\name\\Downloads\\file.xlsx';
// await t
// .expect(fs.existsSync(filePath)).ok();
});
```
1. Run `testcafe ie test.js`.
2. See the following result:

### Your Environment details:
* testcafe version: v1.6.0
* browser name and version: IE 11
* platform and version: Windows 10
",1,download file native dialog in internet explorer ie steps to reproduce index html html download test js js import selector from testcafe import fs from fs fixture fixture page test test async t await t click selector body a nth child await t wait const filepath c users name downloads file xlsx await t expect fs existssync filepath ok run testcafe ie test js see the following result your environment details testcafe version browser name and version ie platform and version windows ,1
660924,22036047624.0,IssuesEvent,2022-05-28 15:48:13,ArctosDB/arctos,https://api.github.com/repos/ArctosDB/arctos,closed,degrees latitude and longitude fields missing in bulkloading,Priority-High (Needed for work) Function-DataEntry/Bulkloading Tool - Bulkload Collecting Events,"From @catherpes Andy Johnson of MSB:
Any records that I have in this file are not going in if they have
degrees decimal minutes. This is the most updated version of the file in
Excel from which I generate the csv file to upload.
I thought maybe the field name for degrees had changed so I went back to
the bulkloader builder. I cleared all checks in the whole builder and
then clicked on DM.m Coordinates. Scrolled down to see what fields are
checked and, apart from the default Coordinate Metadata, only dec_lat_min
and dec_long_min are checked. There needs to be a degrees latitude and
longitude field for these records
[drawer bulkload6.xlsx](https://github.com/ArctosDB/arctos/files/8790374/drawer.bulkload6.xlsx)
.",1.0,"degrees latitude and longitude fields missing in bulkloading - From @catherpes Andy Johnson of MSB:
Any records that I have in this file are not going in if they have
degrees decimal minutes. This is the most updated version of the file in
Excel from which I generate the csv file to upload.
I thought maybe the field name for degrees had changed so I went back to
the bulkloader builder. I cleared all checks in the whole builder and
then clicked on DM.m Coordinates. Scrolled down to see what fields are
checked and, apart from the default Coordinate Metadata, only dec_lat_min
and dec_long_min are checked. There needs to be a degrees latitude and
longitude field for these records
[drawer bulkload6.xlsx](https://github.com/ArctosDB/arctos/files/8790374/drawer.bulkload6.xlsx)
.",0,degrees latitude and longitude fields missing in bulkloading from catherpes andy johnson of msb any records that i have in this file are not going in if they have degrees decimal minutes this is the most updated version of the file in excel from which i generate the csv file to upload i thought maybe the field name for degrees had changed so i went back to the bulkloader builder i cleared all checks in the whole builder and then clicked on dm m coordinates scrolled down to see what fields are checked an apart from the default coordinate metadata only dec lat min and dec long min are checked there needs to be a degrees latitude and longitude field for these records ,0
8331,26734916501.0,IssuesEvent,2023-01-30 08:44:08,submariner-io/releases,https://api.github.com/repos/submariner-io/releases,closed,Releases shouldn’t always be marked as latest,automation,"Releases are currently marked as latest by default, regardless of the branch they come from. This results in get.submariner.io defaulting to the latest chronological release, not the latest version we want end-users to install by default. It also causes upgrade CI jobs to fail, since they install the “latest” release.
See for instance the job failures resulting from the 0.13.2 release — this ended up being the default release, replacing 0.14.0. I fixed the release markers manually but ideally this should be taken care of by the release job (_e.g._ by checking whether there’s a release branch with a higher version and an actual release).",1.0,"Releases shouldn’t always be marked as latest - Releases are currently marked as latest by default, regardless of the branch they come from. This results in get.submariner.io defaulting to the latest chronological release, not the latest version we want end-users to install by default. It also causes upgrade CI jobs to fail, since they install the “latest” release.
See for instance the job failures resulting from the 0.13.2 release — this ended up being the default release, replacing 0.14.0. I fixed the release markers manually but ideally this should be taken care of by the release job (_e.g._ by checking whether there’s a release branch with a higher version and an actual release).",1,releases shouldn’t always be marked as latest releases are currently marked as latest by default regardless of the branch they come from this results in get submariner io defaulting to the latest chronological release not the latest version we want end users to install by default it also causes upgrade ci jobs to fail since they install the “latest” release see for instance the job failures resulting from the release — this ended up being the default release replacing i fixed the release markers manually but ideally this should be taken care of by the release job e g by checking whether there’s a release branch with a higher version and an actual release ,1
2084,11360349953.0,IssuesEvent,2020-01-26 05:56:51,sourcegraph/sourcegraph,https://api.github.com/repos/sourcegraph/sourcegraph,closed,Support running campaigns on > 200 repositories,automation customer,"Customers who ran into issues with this:
- https://app.hubspot.com/contacts/2762526/contact/17877751
- https://app.hubspot.com/contacts/2762526/company/608245740
I couldn't find where we are tracking this functionality - perhaps it is implicit with the introduction of the CLI as the main way to create campaigns, but I'd like to ensure we're watching and addressing this.",1.0,"Support running campaigns on > 200 repositories - Customers who ran into issues with this:
- https://app.hubspot.com/contacts/2762526/contact/17877751
- https://app.hubspot.com/contacts/2762526/company/608245740
I couldn't find where we are tracking this functionality - perhaps it is implicit with the introduction of the CLI as the main way to create campaigns, but I'd like to ensure we're watching and addressing this.",1,support running campaigns on repositories customers who ran into issues with this i couldn t find where we are tracking this functionality perhaps it is implicit with the introduction of the cli as the main way to create campaigns but i d like to ensure we re watching and addressing this ,1
5547,20032421501.0,IssuesEvent,2022-02-02 08:11:17,keptn/keptn,https://api.github.com/repos/keptn/keptn,closed,Integration Tests: Switch to static GKE clusters instead of freshly created ones,type:chore automation refactoring github_actions area:devops,"Right now, the integration tests set up fresh GKE clusters for all tested versions with every run.
To have a more secure and fast setup, we should switch to having static clusters on GKE for all versions that we want to test against. Those should just be cleaned up correctly after each integration test run to be able to properly re-use them in the next run.
Tasks:
- [x] Set up static clusters for all GKE versions that we want to test against
- [x] Ensure that clusters stay on the same k8s version
- [x] Change integration test pipeline to use those clusters for testing
- [ ] Remove now unneeded GH secrets, add kubeconfig(s) for clusters to secrets
- [ ] Add verification steps to the pipeline to ensure that clusters are cleaned up properly after test runs
- [x] Think about using https://codeberg.org/hjacobs/kube-janitor for cleanup so that test setups have a TTL for debugging",1.0,"Integration Tests: Switch to static GKE clusters instead of freshly created ones - Right now, the integration tests set up fresh GKE clusters for all tested versions with every run.
To have a more secure and fast setup, we should switch to having static clusters on GKE for all versions that we want to test against. Those should just be cleaned up correctly after each integration test run to be able to properly re-use them in the next run.
Tasks:
- [x] Set up static clusters for all GKE versions that we want to test against
- [x] Ensure that clusters stay on the same k8s version
- [x] Change integration test pipeline to use those clusters for testing
- [ ] Remove now unneeded GH secrets, add kubeconfig(s) for clusters to secrets
- [ ] Add verification steps to the pipeline to ensure that clusters are cleaned up properly after test runs
- [x] Think about using https://codeberg.org/hjacobs/kube-janitor for cleanup so that test setups have a TTL for debugging",1,integration tests switch to static gke clusters instead of freshly created ones right now the integration tests set up fresh gke clusters for all tested versions with every run to have a more secure and fast setup we should switch to having static clusters on gke for all versions that we want to test against those should just be cleaned up correctly after each integration test run to be able to properly re use them in the next run tasks set up static clusters for all gke versions that we want to test against ensure that clusters stay on the same version change integration test pipeline to use those clusters for testing remove now unneeded gh secrets add kubeconfig s for clusters to secrets add verification steps to the pipeline to ensure that clusters are cleaned up properly after test runs think about using for cleanup so that test setups have a ttl for debugging,1
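Regarding the kube-janitor idea in the last task of the record above: a sketch of how a throwaway test namespace could be annotated so the janitor removes it automatically (the namespace name and TTL value are illustrative):
```yaml
# Hypothetical per-run test namespace, cleaned up by kube-janitor after its TTL expires
apiVersion: v1
kind: Namespace
metadata:
  name: keptn-integration-test-1234
  annotations:
    janitor/ttl: "24h"        # kube-janitor deletes the namespace 24 hours after creation
```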
1993,11222518943.0,IssuesEvent,2020-01-07 20:22:26,bank2ynab/bank2ynab,https://api.github.com/repos/bank2ynab/bank2ynab,closed,Increased project automation (Github Actions),project automation,"**Other feature request**
**Is your feature request related to a problem? Please describe.**
The following issues all reference using hooks to automate certain aspects of the development workflow: #239, #238, #181.
After doing some research, I think the best way to approach these tasks is to actually make use of the deployment abilities of Travis rather than using hooks.
**Describe the solution you'd like**
Let's give Travis superpowers! Essentially, we need to add a ""deployment"" phase to Travis that occurs after a successful round of testing. This could, for example, do the following tasks.
- Bump the version number for every commit to the `master` branch
- Update `README.md` every time a new bank is added to the config
- Automatically fix formatting issues from new commits
- Automatically deploy new versions of the `master` branch to PyPi
We'll need to somehow give Travis an authorisation key, this will be the trickiest part, I think, as it will need to be kept secure. Travis has documentation on encryption, linked below, which will probably be helpful.
**Describe alternatives you've considered**
Either continue to do these tasks manually or use the hooks concept. The problem with the hooks is that these are finicky to install and update. Travis provides a centralised solution.
**Additional context**
***Travis Docs***
Github Releases uploading: https://docs.travis-ci.com/user/deployment/releases/
PyPi Deployment: https://docs.travis-ci.com/user/deployment/pypi/
Encryption Keys: https://docs.travis-ci.com/user/encryption-keys/
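A sketch of what the PyPI deploy phase could look like in `.travis.yml`, assuming a dedicated release account and a password encrypted with `travis encrypt` (both placeholders, not the project's real credentials):
```yaml
# .travis.yml fragment — hypothetical deploy stage, runs only after the test phase passes
deploy:
  provider: pypi
  user: bank2ynab-release-bot            # placeholder account name
  password:
    secure: "<output of travis encrypt>" # encrypted value stored in the repo
  on:
    branch: master                       # deploy only from the protected branch
```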
",1.0,"Increased project automation (Github Actions) - **Other feature request**
**Is your feature request related to a problem? Please describe.**
The following issues all reference using hooks to automate certain aspects of the development workflow: #239, #238, #181.
After doing some research, I think the best way to approach these tasks is to actually make use of the deployment abilities of Travis rather than using hooks.
**Describe the solution you'd like**
Let's give Travis superpowers! Essentially, we need to add a ""deployment"" phase to Travis that occurs after a successful round of testing. This could, for example, do the following tasks.
- Bump the version number for every commit to the `master` branch
- Update `README.md` every time a new bank is added to the config
- Automatically fix formatting issues from new commits
- Automatically deploy new versions of the `master` branch to PyPi
We'll need to somehow give Travis an authorisation key, this will be the trickiest part, I think, as it will need to be kept secure. Travis has documentation on encryption, linked below, which will probably be helpful.
**Describe alternatives you've considered**
Either continue to do these tasks manually or use the hooks concept. The problem with the hooks is that these are finicky to install and update. Travis provides a centralised solution.
**Additional context**
***Travis Docs***
Github Releases uploading: https://docs.travis-ci.com/user/deployment/releases/
PyPi Deployment: https://docs.travis-ci.com/user/deployment/pypi/
Encryption Keys: https://docs.travis-ci.com/user/encryption-keys/
",1,increased project automation github actions other feature request is your feature request related to a problem please describe the following issues all reference using hooks to automate certain aspects of the development workflow after doing some research i think the best way to approach these tasks is to actually make use of the deployment abilities of travis rather than using hooks describe the solution you d like let s give travis superpowers essentially we need to add a deployment phase to travis that occurs after a successful round of testing this could for example do the following tasks bump the version number for every commit to the master branch update readme md every time a new bank is added to the config automatically fix formatting issues from new commits automatically deploy new versions of the master branch to pypi we ll need to somehow give travis an authorisation key this will be the trickiest part i think as it will need to be kept secure travis has documentation on encryption linked below which will probably be helpful describe alternatives you ve considered either continue to do these tasks manually or use the hooks concept the problem with the hooks is that these are finicky to install and update travis provides a centralised solution additional context travis docs github releases uploading pypi deployment encryption keys ,1
3795,14613675119.0,IssuesEvent,2020-12-22 08:39:07,Tithibots/tithiwa,https://api.github.com/repos/Tithibots/tithiwa,closed,Create delete_chats_of_all_contacts() in contacts.py,Selenium Automation enhancement good first issue python,"do as follows:
1. go through all contacts' chats, the same as in [exit_from_all_groups()](https://github.com/Tithibots/tithiwa/blob/a278e4a27af13a8469262ff28328ca74135441eb/tithiwa/group.py#L114)
NOTE: by using [CONTACTS__NAME_IN_CHATS ](https://github.com/Tithibots/tithiwa/blob/0ba6306873121bd3b87e9a53cae780c474586672/tithiwa/constants.py#L29) you can get all chats of contacts.
2. open chat options by using [CHATROOM__OPTIONS](https://github.com/Tithibots/tithiwa/blob/c44c381913171a1b5528b59c598dd679e38b62b0/tithiwa/constants.py#L28)
3. press on `Delete chat` and wait for the chat to be deleted by using [self._close_info()](https://github.com/Tithibots/tithiwa/blob/c44c381913171a1b5528b59c598dd679e38b62b0/tithiwa/waobject.py#L21)

",1.0,"Create delete_chats_of_all_contacts() in contacts.py - do as follow
1. go through all contacts chats same as in [exit_from_all_groups()](https://github.com/Tithibots/tithiwa/blob/a278e4a27af13a8469262ff28328ca74135441eb/tithiwa/group.py#L114)
NOTE: by using [CONTACTS__NAME_IN_CHATS ](https://github.com/Tithibots/tithiwa/blob/0ba6306873121bd3b87e9a53cae780c474586672/tithiwa/constants.py#L29) you can get all chats of contacts.
2. open chat options by using [CHATROOM__OPTIONS](https://github.com/Tithibots/tithiwa/blob/c44c381913171a1b5528b59c598dd679e38b62b0/tithiwa/constants.py#L28)
3. press on `Delete chat` and wait for the chat to be deleted by using [self._close_info()](https://github.com/Tithibots/tithiwa/blob/c44c381913171a1b5528b59c598dd679e38b62b0/tithiwa/waobject.py#L21)

",1,create delete chats of all contacts in contacts py do as follow go through all contacts chats same as in note by using you can get all chats of contacts open chat options by using press on delete chat and wait for the chat to be deleted by using ,1
3363,13563892785.0,IssuesEvent,2020-09-18 09:13:53,SatelliteQE/airgun,https://api.github.com/repos/SatelliteQE/airgun,opened,"DiscoveryRule view uses different locator for ""search"" field",Automation failure,"there's no longer an `@id` attribute, but rather `name`.",1.0,"DiscoveryRule view uses different locator for ""search"" field - there's no longer an `@id` attribute, but rather `name`.",1,discoveryrule view uses different locator for search field there s no longer an id attribute but rather name ,1
7194,24384902165.0,IssuesEvent,2022-10-04 10:54:10,ZhengqiaoWang/blog-comment,https://api.github.com/repos/ZhengqiaoWang/blog-comment,opened,Office Automation UI Module | 王政乔,gitalk /office_automation/自动化办公UI.html,"https://www.zhengqiao.wang/office_automation/%E8%87%AA%E5%8A%A8%E5%8C%96%E5%8A%9E%E5%85%ACUI.html
Office automation series: this is a series I use to help the many people who don't know Python very well but would still like to use Python for office automation. This module helps users quickly build interfaces, covering basic input, file selection, and prompts. By following the tutorial below, you can quickly put together some simple processing tools without having to grind away at the command line.",1.0,"Office Automation UI Module | 王政乔 - https://www.zhengqiao.wang/office_automation/%E8%87%AA%E5%8A%A8%E5%8C%96%E5%8A%9E%E5%85%ACUI.html
Office automation series: this is a series I use to help the many people who don't know Python very well but would still like to use Python for office automation. This module helps users quickly build interfaces, covering basic input, file selection, and prompts. By following the tutorial below, you can quickly put together some simple processing tools without having to grind away at the command line.",1,自动化办公ui模块 王政乔 自动化办公系列:这个是我用来帮助广大不怎么了解python但又希望通过使用python实现自动化办公的系列。这个模块能帮助用户快速地处理构建界面,可以满足基本的输入、文件选择和提示。根据下面的教程提示,可以帮助你快速的实现一些简单的处理小工具,而不需要吭哧吭哧地在命令行上敲来敲去。,1
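This is not the author's module, but as an illustration of the kind of dialogs the post describes (basic input, file selection, and a prompt), a stand-in can be built from the standard-library tkinter dialogs alone:

```python
# Illustration only: a tiny stand-in for the kind of helper UI described above,
# built from the standard-library tkinter dialogs.
import tkinter as tk
from tkinter import filedialog, messagebox, simpledialog


def ask_user():
    root = tk.Tk()
    root.withdraw()  # no main window needed, only the dialogs
    name = simpledialog.askstring("Input", "What is your name?")
    path = filedialog.askopenfilename(title="Pick a file to process")
    messagebox.showinfo("Done", f"Hello {name}, you picked:\n{path}")
    root.destroy()
    return name, path


if __name__ == "__main__":
    ask_user()
```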
82745,15679669325.0,IssuesEvent,2021-03-25 01:03:44,bci-oss/keycloak,https://api.github.com/repos/bci-oss/keycloak,opened,CVE-2019-12418 (High) detected in tomcat-catalina-7.0.92.jar,security vulnerability,"## CVE-2019-12418 - High Severity Vulnerability
Vulnerable Library - tomcat-catalina-7.0.92.jar
Tomcat Servlet Engine Core Classes and Standard implementations
Path to dependency file: keycloak/adapters/oidc/tomcat/tomcat7/pom.xml
Path to vulnerable library: canner/.m2/repository/org/apache/tomcat/tomcat-catalina/7.0.92/tomcat-catalina-7.0.92.jar,canner/.m2/repository/org/apache/tomcat/tomcat-catalina/7.0.92/tomcat-catalina-7.0.92.jar,canner/.m2/repository/org/apache/tomcat/tomcat-catalina/7.0.92/tomcat-catalina-7.0.92.jar,canner/.m2/repository/org/apache/tomcat/tomcat-catalina/7.0.92/tomcat-catalina-7.0.92.jar,canner/.m2/repository/org/apache/tomcat/tomcat-catalina/7.0.92/tomcat-catalina-7.0.92.jar
When Apache Tomcat 9.0.0.M1 to 9.0.28, 8.5.0 to 8.5.47, 7.0.0 and 7.0.97 is configured with the JMX Remote Lifecycle Listener, a local attacker without access to the Tomcat process or configuration files is able to manipulate the RMI registry to perform a man-in-the-middle attack to capture user names and passwords used to access the JMX interface. The attacker can then use these credentials to access the JMX interface and gain complete control over the Tomcat instance.
",True,"CVE-2019-12418 (High) detected in tomcat-catalina-7.0.92.jar - ## CVE-2019-12418 - High Severity Vulnerability
Vulnerable Library - tomcat-catalina-7.0.92.jar
Tomcat Servlet Engine Core Classes and Standard implementations
Path to dependency file: keycloak/adapters/oidc/tomcat/tomcat7/pom.xml
Path to vulnerable library: canner/.m2/repository/org/apache/tomcat/tomcat-catalina/7.0.92/tomcat-catalina-7.0.92.jar,canner/.m2/repository/org/apache/tomcat/tomcat-catalina/7.0.92/tomcat-catalina-7.0.92.jar,canner/.m2/repository/org/apache/tomcat/tomcat-catalina/7.0.92/tomcat-catalina-7.0.92.jar,canner/.m2/repository/org/apache/tomcat/tomcat-catalina/7.0.92/tomcat-catalina-7.0.92.jar,canner/.m2/repository/org/apache/tomcat/tomcat-catalina/7.0.92/tomcat-catalina-7.0.92.jar
When Apache Tomcat 9.0.0.M1 to 9.0.28, 8.5.0 to 8.5.47, 7.0.0 and 7.0.97 is configured with the JMX Remote Lifecycle Listener, a local attacker without access to the Tomcat process or configuration files is able to manipulate the RMI registry to perform a man-in-the-middle attack to capture user names and passwords used to access the JMX interface. The attacker can then use these credentials to access the JMX interface and gain complete control over the Tomcat instance.
",0,cve high detected in tomcat catalina jar cve high severity vulnerability vulnerable library tomcat catalina jar tomcat servlet engine core classes and standard implementations path to dependency file keycloak adapters oidc tomcat pom xml path to vulnerable library canner repository org apache tomcat tomcat catalina tomcat catalina jar canner repository org apache tomcat tomcat catalina tomcat catalina jar canner repository org apache tomcat tomcat catalina tomcat catalina jar canner repository org apache tomcat tomcat catalina tomcat catalina jar canner repository org apache tomcat tomcat catalina tomcat catalina jar dependency hierarchy x tomcat catalina jar vulnerable library found in base branch master vulnerability details when apache tomcat to to and is configured with the jmx remote lifecycle listener a local attacker without access to the tomcat process or configuration files is able to manipulate the rmi registry to perform a man in the middle attack to capture user names and passwords used to access the jmx interface the attacker can then use these credentials to access the jmx interface and gain complete control over the tomcat instance publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity high privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org apache tomcat tomcat catalina org apache tomcat tomcat catalina org apache tomcat tomcat catalina org apache tomcat embed tomcat embed core isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree org apache tomcat tomcat catalina isminimumfixversionavailable true minimumfixversion org apache tomcat tomcat catalina org apache tomcat tomcat catalina org apache tomcat tomcat catalina org apache tomcat embed tomcat embed core basebranches vulnerabilityidentifier cve vulnerabilitydetails when apache tomcat to to and is configured with the jmx remote lifecycle listener a local attacker without access to the tomcat process or configuration files is able to manipulate the rmi registry to perform a man in the middle attack to capture user names and passwords used to access the jmx interface the attacker can then use these credentials to access the jmx interface and gain complete control over the tomcat instance vulnerabilityurl ,0
9194,27712647654.0,IssuesEvent,2023-03-14 15:06:56,githubcustomers/discovery.co.za,https://api.github.com/repos/githubcustomers/discovery.co.za,opened,Task Eight: Compare Other SAST and CodeQL Results,ghas-trial automation Important,"# Task Eight: Compare Other SAST and CodeQL Results
CodeQL Defaults results are **very** precise.
We advocate comparing results to other SAST tools. Still, when comparing, we recommend that a minimum threshold of `security-extended` be used for comparison, but `security-and-quality` will yield the maximum number of results.
When comparing results from other SAST tools, look at the **quality** of the results returned, not the number. Remember, if your current SAST tool returns 20 vulnerabilities, that doesn't mean that 20 need to be fixed. The higher the number of vulnerabilities, the longer it will take a developer to look through the data to understand false positives versus correct matches.
Code Scanning is precise, which means the results from the default pack should be accurate and of high quality, meaning less time spent understanding false positives and a quicker delivery time for your business whilst staying just as secure!
Additionally, a developer is more likely to properly look through the results of a tool that returns streamlined, high-quality findings than those of a tool that casts a wide net and may waste their time. This means that, hopefully, you become more secure as you increase your adoption of security.
",1.0,"Task Eight: Compare Other SAST and CodeQL Results - # Task Eight: Compare Other SAST and CodeQL Results
CodeQL Defaults results are **very** precise.
We advocate comparing results to other SAST tools. Still, when comparing, we recommend that a minimum threshold of `security-extended` be used for comparison, but `security-and-quality` will yield the maximum number of results.
When comparing results from other SAST tools, look at the **quality** of the results returned, not the number. Remember, if your current SAST tool returns 20 vulnerabilities, that doesn't mean that 20 need to be fixed. The higher the number of vulnerabilities, the longer it will take a developer to look through the data to understand false positives versus correct matches.
Code Scanning is precise, which means the results from the default pack should be accurate and of high quality, meaning less time spent understanding false positives and a quicker delivery time for your business whilst staying just as secure!
Additionally, a developer is more likely to properly look through the results of a tool that returns streamlined, high-quality findings than those of a tool that casts a wide net and may waste their time. This means that, hopefully, you become more secure as you increase your adoption of security.
",1,task eight compare other sast and codeql results task eight compare other sast and codeql results codeql defaults results are very precise we advocate comparing results to other sast tools still when comparing we recommended that a minimum threshold of security extended be used for comparison but security and quality will yield maximum results when comparing results from other sast tools look at the quality of the responses back not the number remember if your current sast tool returns vulnerabilities that doesn t mean that need to be fixed the higher the number of vulnerabilities the longer it will take a developer to look through the data to understand false positives versus correct matches code scanning is precise which means the results from the default pack should be accurate and of high quality meaning less time spent understanding false positives and quicker delivery time for your business whilst staying as secure additionally a developer will be more likely to properly look through data of a tool that returns streamlined and high quality results than a wide casting tool and may be wasting their time meaning hopefully you are going to be more secure as you are increasing your adoption of security ,1
545340,15948599653.0,IssuesEvent,2021-04-15 06:08:37,openshift/odo,https://api.github.com/repos/openshift/odo,closed,broken odo link,priority/Critical release-blocker,"```
▶ odo service list
NAME AGE
MariaDB/mariadb 17m10s
▶ odo link MariaDB/mariadb
✓ Successfully created link between component ""springboot"" and service ""MariaDB/mariadb""
To apply the link, please use `odo push`
▶ odo push
✓ Waiting for component to start [9ms]
Validation
✓ Validating the devfile [23800ns]
Creating Kubernetes resources for component springboot
✗ Failed to start component with name springboot. Error: Failed to create the component: unable to create or update component: pvc not found for mount path springboot-mariadb-mariadb
```
using SBO 0.7.0
```
▶ odo version
odo v2.1.0 (6040118b0)
```
/priority critical
",1.0,"broken odo link - ```
▶ odo service list
NAME AGE
MariaDB/mariadb 17m10s
▶ odo link MariaDB/mariadb
✓ Successfully created link between component ""springboot"" and service ""MariaDB/mariadb""
To apply the link, please use `odo push`
▶ odo push
✓ Waiting for component to start [9ms]
Validation
✓ Validating the devfile [23800ns]
Creating Kubernetes resources for component springboot
✗ Failed to start component with name springboot. Error: Failed to create the component: unable to create or update component: pvc not found for mount path springboot-mariadb-mariadb
```
using SBO 0.7.0
```
▶ odo version
odo v2.1.0 (6040118b0)
```
/priority critical
",0,broken odo link ▶ odo service list name age mariadb mariadb ▶ odo link mariadb mariadb ✓ successfully created link between component springboot and service mariadb mariadb to apply the link please use odo push ▶ odo push ✓ waiting for component to start validation ✓ validating the devfile creating kubernetes resources for component springboot ✗ failed to start component with name springboot error failed to create the component unable to create or update component pvc not found for mount path springboot mariadb mariadb using sbo ▶ odo version odo priority critical ,0
682,7785589596.0,IssuesEvent,2018-06-06 16:15:38,pypa/pip,https://api.github.com/repos/pypa/pip,closed,Skipping CI when code doesn't change,C: automation T: DevOps,"I think it would be useful if pip's CI skipped running tests when a change doesn't really modify any code. This would mean that documentation changes and news-file updates would not result in a 40 minute complete CI run, just a short sweet one. ^.^
If a changeset does not touch any file within `pip/` or `tests/`, the test run would be skipped but the linting and the like would still run.
FWIW, [cpython does it](https://github.com/python/cpython/blob/master/.travis.yml#L53) on Travis CI.
---
Should I investigate further into this - seeing if can be done for pip? ",1.0,"Skipping CI when code doesn't change - I think it would be useful if pip's CI skipped running tests when a change doesn't really modify any code. This would mean that documentation changes and news-file updates would not result in a 40 minute complete CI run, just a short sweet one. ^.^
If a changeset does not touch any file within `pip/` or `tests/`, the test run would be skipped but the linting and the like would still run.
FWIW, [cpython does it](https://github.com/python/cpython/blob/master/.travis.yml#L53) on Travis CI.
---
Should I investigate further into this - seeing if can be done for pip? ",1,skipping ci when code doesn t change i think it would be useful if pip s ci skipped running tests when a change doesn t really modify any code this would mean that documentation changes and news file updates would not result in a minute complete ci run just a short sweet one if a changeset does not touch any file within pip or tests test run would be skipped but the linting and likes would still run fwiw on travis ci should i investigate further into this seeing if can be done for pip ,1
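As a sketch of the check described above (not pip's actual tooling), a small script like the following could decide whether the test run is needed. The wrapper that interprets its RUN/SKIP output and the fallback commit range are assumptions; `TRAVIS_COMMIT_RANGE` is the variable Travis CI provides for the pushed range.

```python
# should_run_tests.py - hypothetical gate for the CI job; prints RUN or SKIP.
import os
import subprocess

CODE_PATHS = ("pip/", "tests/")  # changes outside these paths don't need a test run


def changed_files(commit_range: str) -> list:
    """Return the files touched in the given git commit range."""
    out = subprocess.check_output(
        ["git", "diff", "--name-only", commit_range], text=True
    )
    return [line for line in out.splitlines() if line]


def main() -> None:
    # TRAVIS_COMMIT_RANGE is set by Travis CI; the fallback range is an assumption.
    commit_range = os.environ.get("TRAVIS_COMMIT_RANGE", "HEAD~1..HEAD")
    touched = changed_files(commit_range)
    if any(path.startswith(CODE_PATHS) for path in touched):
        print("RUN")   # code changed: run the full test suite
    else:
        print("SKIP")  # docs/news only: linting still runs, tests are skipped


if __name__ == "__main__":
    main()
```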
9740,30462406588.0,IssuesEvent,2023-07-17 07:59:10,DevExpress/testcafe,https://api.github.com/repos/DevExpress/testcafe,closed,Testcafe hangs when interacting with login page,TYPE: bug FREQUENCY: level 1 SYSTEM: native automation,"### What is your Scenario?
We are facing an issue within our TestCafe tests where they hang when interacting with the login page. TestCafe does not reach its assertion or selector timeout, instead it just completely stops.
Please see example.
### What is the Current behavior?
TestCafe hangs after performing an action and does not move onto next action.
### What is the Expected behavior?
TestCafe to either move onto next action or timeout.
### What is your public website URL? (or attach your complete example)
economist.com
### What is your TestCafe test code?
// cookie-consent.js //
```
import { RequestMock } from 'testcafe';
// Mock Evidon cookie consent to avoid interacting with the module on each new session
export function mockEvidonCookieConsent() {
return RequestMock()
.onRequestTo(/evidon\.com\//)
.respond('', 200);
}
export function mockSourcepointCookieConsent() {
return RequestMock()
.onRequestTo(/cmp-cdn\.p\.aws\.economist\.com\/latest\/cmp\.min\.js/)
.respond('', 200);
}
```
// example.js //
```
import { fixture , Selector } from 'testcafe';
import { xpathSelector} from './helpers';
import {
mockEvidonCookieConsent,
mockSourcepointCookieConsent,
} from './cookie-consent';
fixture `example fixture`
.page('economist.com')
.requestHooks([mockEvidonCookieConsent(), mockSourcepointCookieConsent()])
const emailField = xpathSelector('//*[@type=""text""]');
const passwordField = xpathSelector('//*[@type=""password""]');
const loginLink = Selector('.ds-masthead').find('a').withText('Log in').filterVisible();
test('example', async t => {
await t.click(loginLink)
await t.typeText(emailField, 'email@example.com')
console.log('email entered')
await t.typeText(passwordField, 'password')
console.log('password entered')
});
test('example 2', async t => {
await t.click(loginLink)
await t.typeText(emailField, 'email@example.com')
console.log('email entered')
await t.typeText(passwordField, 'password')
console.log('password entered')
});
test('example 3', async t => {
await t.click(loginLink)
await t.typeText(emailField, 'email@example.com')
console.log('email entered')
await t.typeText(passwordField, 'password')
console.log('password entered')
});
```
// helpers.js //
```
import { Selector } from 'testcafe';
/**
* Retrieves all elements that match a given xpath.
* @param {string} xpath - xpath to search with
* @return {Object} found elements
*/
const getElementsByXPath = Selector(xpath => {
const iterator = document.evaluate(
xpath,
document,
null,
XPathResult.UNORDERED_NODE_ITERATOR_TYPE,
null,
);
const items = [];
let item = iterator.iterateNext();
while (item) {
items.push(item);
item = iterator.iterateNext();
}
return items;
});
/**
* Create a selector based on a xpath. Testcafe does not natively support xpath
* selectors, hence this function.
* @param {string} xpath - xpath to search with
* @returns {Selector} returns a selector
*/
export const xpathSelector = xpath => {
return Selector(getElementsByXPath(xpath));
};
```
### Your complete configuration file
_No response_
### Your complete test report
None, testcafe hangs and never finishes.
### Screenshots
### Steps to Reproduce
1. Run example.js
2. Notice that TestCafe will hang
### TestCafe version
2.5.1-rc.1
### Node.js version
16.20
### Command-line arguments
testcafe chrome example.js --skip-js-errors
### Browser name(s) and version(s)
Chrome
### Platform(s) and version(s)
macOS
### Other
_No response_",1.0,"Testcafe hangs when interacting with login page - ### What is your Scenario?
We are facing an issue within our TestCafe tests where they hang when interacting with the login page. TestCafe does not reach its assertion or selector timeout, instead it just completely stops.
Please see example.
### What is the Current behavior?
TestCafe hangs after performing an action and does not move onto next action.
### What is the Expected behavior?
TestCafe to either move onto next action or timeout.
### What is your public website URL? (or attach your complete example)
economist.com
### What is your TestCafe test code?
// cookie-consent.js //
```
import { RequestMock } from 'testcafe';
// Mock Evidon cookie consent to avoid interacting with the module on each new session
export function mockEvidonCookieConsent() {
return RequestMock()
.onRequestTo(/evidon\.com\//)
.respond('', 200);
}
export function mockSourcepointCookieConsent() {
return RequestMock()
.onRequestTo(/cmp-cdn\.p\.aws\.economist\.com\/latest\/cmp\.min\.js/)
.respond('', 200);
}
```
// example.js //
```
import { fixture , Selector } from 'testcafe';
import { xpathSelector} from './helpers';
import {
mockEvidonCookieConsent,
mockSourcepointCookieConsent,
} from './cookie-consent';
fixture `example fixture`
.page('economist.com')
.requestHooks([mockEvidonCookieConsent(), mockSourcepointCookieConsent()])
const emailField = xpathSelector('//*[@type=""text""]');
const passwordField = xpathSelector('//*[@type=""password""]');
const loginLink = Selector('.ds-masthead').find('a').withText('Log in').filterVisible();
test('example', async t => {
await t.click(loginLink)
await t.typeText(emailField, 'email@example.com')
console.log('email entered')
await t.typeText(passwordField, 'password')
console.log('password entered')
});
test('example 2', async t => {
await t.click(loginLink)
await t.typeText(emailField, 'email@example.com')
console.log('email entered')
await t.typeText(passwordField, 'password')
console.log('password entered')
});
test('example 3', async t => {
await t.click(loginLink)
await t.typeText(emailField, 'email@example.com')
console.log('email entered')
await t.typeText(passwordField, 'password')
console.log('password entered')
});
```
// helpers.js //
```
import { Selector } from 'testcafe';
/**
* Retrieves all elements that match a given xpath.
* @param {string} xpath - xpath to search with
* @return {Object} found elements
*/
const getElementsByXPath = Selector(xpath => {
const iterator = document.evaluate(
xpath,
document,
null,
XPathResult.UNORDERED_NODE_ITERATOR_TYPE,
null,
);
const items = [];
let item = iterator.iterateNext();
while (item) {
items.push(item);
item = iterator.iterateNext();
}
return items;
});
/**
* Create a selector based on a xpath. Testcafe does not natively support xpath
* selectors, hence this function.
* @param {string} xpath - xpath to search with
* @returns {Selector} returns a selector
*/
export const xpathSelector = xpath => {
return Selector(getElementsByXPath(xpath));
};
```
### Your complete configuration file
_No response_
### Your complete test report
None, testcafe hangs and never finishes.
### Screenshots
### Steps to Reproduce
1. Run example.js
2. Notice that TestCafe will hang
### TestCafe version
2.5.1-rc.1
### Node.js version
16.20
### Command-line arguments
testcafe chrome example.js --skip-js-errors
### Browser name(s) and version(s)
Chrome
### Platform(s) and version(s)
macOS
### Other
_No response_",1,testcafe hangs when interacting with login page what is your scenario we are facing an issue within our testcafe tests where they hang when interacting with the login page testcafe does not reach its assertion or selector timeout instead it just completely stops please see example what is the current behavior testcafe hangs after performing an action and does not move onto next action what is the expected behavior testcafe to either move onto next action or timeout what is your public website url or attach your complete example economist com what is your testcafe test code cookie consent js import requestmock from testcafe mock evidon cookie consent to avoid interacting with the module on each new session export function mockevidoncookieconsent return requestmock onrequestto evidon com respond export function mocksourcepointcookieconsent return requestmock onrequestto cmp cdn p aws economist com latest cmp min js respond example js import fixture selector from testcafe import xpathselector from helpers import mockevidoncookieconsent mocksourcepointcookieconsent from cookie consent fixture example fixture page economist com requesthooks const emailfield xpathselector const passwordfield xpathselector const loginlink selector ds masthead find a withtext log in filtervisible test example async t await t click loginlink await t typetext emailfield email example com console log email entered await t typetext passwordfield password console log password entered test example async t await t click loginlink await t typetext emailfield email example com console log email entered await t typetext passwordfield password console log password entered test example async t await t click loginlink await t typetext emailfield email example com console log email entered await t typetext passwordfield password console log password entered helpers js import selector from testcafe retrieves all elements that match a given xpath param string xpath xpath to search with return object found elements const getelementsbyxpath selector xpath const iterator document evaluate xpath document null xpathresult unordered node iterator type null const items let item iterator iteratenext while item items push item item iterator iteratenext return items create a selector based on a xpath testcafe does not natively support xpath selectors hence this function param string xpath xpath to search with returns selector returns a selector export const xpathselector xpath return selector getelementsbyxpath xpath your complete configuration file no response your complete test report none testcafe hangs and never finishes img width alt screenshot at src screenshots img width alt screenshot at src img width alt screenshot at src steps to reproduce run example js notice that testcafe will hang testcafe version rc node js version command line arguments testcafe chrome example js skip js errors browser name s and version s chrome platform s and version s macos other no response ,1
324923,24024680923.0,IssuesEvent,2022-09-15 10:28:23,ita-social-projects/dokazovi-requirements,https://api.github.com/repos/ita-social-projects/dokazovi-requirements,opened,[Test for Story #604] Verify that admin can't schedule the material with an invalid date/time or without confirmation,documentation test case,"**Story link**
[#604 Story](https://github.com/ita-social-projects/dokazovi-requirements/issues/604#issue-1344414724)
### Status:
Pass/Fail/Not executed
### Title:
Verify that admin can't schedule the material with an invalid date/time or without confirmation
### Description:
Verify that the admin is not able to schedule the material when an invalid date and time is entered, or when the admin doesn't confirm the chosen date and time, and that the material's status doesn't change
### Pre-conditions:
The admin is logged in
Адміністрування → Керування матеріалами → material with <На модерації> status or <В архіві> status→Дії → Запланувати публікацію
Step № | Test Steps | Test data | Expected result | Status (Pass/Fail/Not executed) | Notes
------------ | ------------ | ------------ | ------------ | ------------ | ------------
1 | Click on the Date&Time picker component  and select the date and time | | The date and time are selected and shown. The selected date and time is validated automatically by the system. The date - in dd.mm.yyyy format. The time - in hh:mm format |Not executed| Mockup
2 | Click on the 'Ні' button| | Date&Time picker component is closed and the changes are not saved | Not executed|
3 | Entered a valid date and time in the Date Time picker component and repeat the step | | The date and time are entered and shown. The entered date and time is validated automatically by the system. The date - in dd.mm.yyyy format. The time - in hh:mm format. |Not executed| Mockup
4 | Click on the 'Ні' button| | Date&Time picker component is closed and the changes are not saved | Not executed|
5 | Entered an invalid date/time in the Date Time picker component and click on the 'Так' button | | The system validates the data and shows the error message: 'Введіть коректну дату та час' below the 'Обрати дату та час' field |Not executed|


### Dependencies:
[#604](https://github.com/ita-social-projects/dokazovi-requirements/issues/604#issue-1344414724)
### [Gantt Chart](https://docs.google.com/spreadsheets/d/1bgaEJDOf3OhfNRfP-WWPKmmZFW5C3blOUxamE3wSCbM/edit#gid=775577959)
",1.0,"[Test for Story #604] Verify that admin can't schedule the material with an invalid date/time or without confirmation - **Story link**
[#604 Story](https://github.com/ita-social-projects/dokazovi-requirements/issues/604#issue-1344414724)
### Status:
Pass/Fail/Not executed
### Title:
Verify that admin can't schedule the material with an invalid date/time or without confirmation
### Description:
Verify that the admin is not able to schedule the material when an invalid date and time is entered, or when the admin doesn't confirm the chosen date and time, and that the material's status doesn't change
### Pre-conditions:
The admin is logged in
Адміністрування → Керування матеріалами → material with <На модерації> status or <В архіві> status→Дії → Запланувати публікацію
Step № | Test Steps | Test data | Expected result | Status (Pass/Fail/Not executed) | Notes
------------ | ------------ | ------------ | ------------ | ------------ | ------------
1 | Click on the Date&Time picker component  and select the date and time | | The date and time are selected and shown. The selected date and time is validated automatically by the system. The date - in dd.mm.yyyy format. The time - in hh:mm format |Not executed| Mockup
2 | Click on the 'Ні' button| | Date&Time picker component is closed and the changes are not saved | Not executed|
3 | Entered a valid date and time in the Date Time picker component and repeat the step | | The date and time are entered and shown. The entered date and time is validated automatically by the system. The date - in dd.mm.yyyy format. The time - in hh:mm format. |Not executed| Mockup
4 | Click on the 'Ні' button| | Date&Time picker component is closed and the changes are not saved | Not executed|
5 | Entered an invalid date/time in the Date Time picker component and click on the 'Так' button | | The system validates the data and shows the error message: 'Введіть коректну дату та час' below the 'Обрати дату та час' field |Not executed|


### Dependencies:
[#604](https://github.com/ita-social-projects/dokazovi-requirements/issues/604#issue-1344414724)
### [Gantt Chart](https://docs.google.com/spreadsheets/d/1bgaEJDOf3OhfNRfP-WWPKmmZFW5C3blOUxamE3wSCbM/edit#gid=775577959)
",0, verify that admin can t schedule the material with an invalid date time or without confirmation story link status pass fail not executed title verify that admin can t schedule the material with an invalid date time or without confirmation description verify that admin is not able to schedule the material with an invalid entered date and time entered or when the admin doen t confirm chosen date and time and the material’s status doesn t change pre conditions the admin is logged in адміністрування → керування матеріалами → material with status or status→дії → запланувати публікацію step № test steps test data expected result status pass fail not executed notes click on the date time picker component and select the date and time the date and time are selected and shown the selected date and time is validated automatically by the system the date in dd mm yyyy format the time in hh mm format not executed mockup click on the ні button date time picker component is closed and the changes are not saved not executed entered a valid date and time in the date time picker component and repeat the step the date and time are entered and shown the entered date and time is validated automatically by the system the date in dd mm yyyy format the time in hh mm format not executed mockup click on the ні button date time picker component is closed and the changes are not saved not executed entered an invalid date time in the date time picker component and click on the так button the system validates the data and shows the error message введіть коректну дату та час below the обрати дату та час field not executed dependencies ,0
584259,17409860674.0,IssuesEvent,2021-08-03 10:52:55,AtlasOfLivingAustralia/collectory,https://api.github.com/repos/AtlasOfLivingAustralia/collectory,closed,Dynamic representation of partner profile links,enhancement priority-medium status-new type-task,"_migrated from:_ https://code.google.com/p/ala/issues/detail?id=711
_date:_ Wed Jun 18 21:20:27 2014
_author:_ alau...@gmail.com
---
Currently the links to partner profiles are static. A more dynamic representation to draw attention to profiles from the home page is preferred.
",1.0,"Dynamic representation of partner profile links - _migrated from:_ https://code.google.com/p/ala/issues/detail?id=711
_date:_ Wed Jun 18 21:20:27 2014
_author:_ alau...@gmail.com
---
Currently the links to partner profiles are static. A more dynamic representation to draw attention to profiles from the home page is preferred.
",0,dynamic representation of partner profile links migrated from date wed jun author alau gmail com currently the links to partner profiles are static a more dynamic representation to draw attention to profiles from the home page is preferred ,0
168328,13079801608.0,IssuesEvent,2020-08-01 04:53:13,cockroachdb/cockroach,https://api.github.com/repos/cockroachdb/cockroach,closed,roachtest: hibernate failed,C-test-failure O-roachtest O-robot branch-provisional_202007220233_v20.2.0-alpha.2 release-blocker,"[(roachtest).hibernate failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2107811&tab=buildLog) on [provisional_202007220233_v20.2.0-alpha.2@d3119926d33d808c6384cf3e99a7f7435f395489](https://github.com/cockroachdb/cockroach/commits/d3119926d33d808c6384cf3e99a7f7435f395489):
```
The test failed on branch=provisional_202007220233_v20.2.0-alpha.2, cloud=gce:
test artifacts and logs in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/hibernate/run_1
orm_helpers.go:214,orm_helpers.go:144,java_helpers.go:216,hibernate.go:173,hibernate.go:185,test_runner.go:757:
Tests run on Cockroach v20.2.0-alpha.1-933-gd3119926d3
Tests run against hibernate HHH-13724-cockroachdb-dialects
8322 Total Tests Run
8321 tests passed
1 test failed
1951 tests skipped
0 tests ignored
0 tests passed unexpectedly
1 test failed unexpectedly
0 tests expected failed but skipped
0 tests expected failed but not run
---
--- FAIL: org.hibernate.jpa.test.graphs.EntityGraphTest.attributeNodeInheritanceTest - unknown (unexpected)
For a full summary look at the hibernate artifacts
An updated blocklist (hibernateBlockList20_2) is available in the artifacts' hibernate log
```
More
Artifacts: [/hibernate](https://teamcity.cockroachdb.com/viewLog.html?buildId=2107811&tab=artifacts#/hibernate)
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Ahibernate.%2A&sort=title&restgroup=false&display=lastcommented+project)
powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
",2.0,"roachtest: hibernate failed - [(roachtest).hibernate failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2107811&tab=buildLog) on [provisional_202007220233_v20.2.0-alpha.2@d3119926d33d808c6384cf3e99a7f7435f395489](https://github.com/cockroachdb/cockroach/commits/d3119926d33d808c6384cf3e99a7f7435f395489):
```
The test failed on branch=provisional_202007220233_v20.2.0-alpha.2, cloud=gce:
test artifacts and logs in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/hibernate/run_1
orm_helpers.go:214,orm_helpers.go:144,java_helpers.go:216,hibernate.go:173,hibernate.go:185,test_runner.go:757:
Tests run on Cockroach v20.2.0-alpha.1-933-gd3119926d3
Tests run against hibernate HHH-13724-cockroachdb-dialects
8322 Total Tests Run
8321 tests passed
1 test failed
1951 tests skipped
0 tests ignored
0 tests passed unexpectedly
1 test failed unexpectedly
0 tests expected failed but skipped
0 tests expected failed but not run
---
--- FAIL: org.hibernate.jpa.test.graphs.EntityGraphTest.attributeNodeInheritanceTest - unknown (unexpected)
For a full summary look at the hibernate artifacts
An updated blocklist (hibernateBlockList20_2) is available in the artifacts' hibernate log
```
More
Artifacts: [/hibernate](https://teamcity.cockroachdb.com/viewLog.html?buildId=2107811&tab=artifacts#/hibernate)
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Ahibernate.%2A&sort=title&restgroup=false&display=lastcommented+project)
powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
",0,roachtest hibernate failed on the test failed on branch provisional alpha cloud gce test artifacts and logs in home agent work go src github com cockroachdb cockroach artifacts hibernate run orm helpers go orm helpers go java helpers go hibernate go hibernate go test runner go tests run on cockroach alpha tests run against hibernate hhh cockroachdb dialects total tests run tests passed test failed tests skipped tests ignored tests passed unexpectedly test failed unexpectedly tests expected failed but skipped tests expected failed but not run fail org hibernate jpa test graphs entitygraphtest attributenodeinheritancetest unknown unexpected for a full summary look at the hibernate artifacts an updated blocklist is available in the artifacts hibernate log more artifacts powered by ,0
89499,15829643748.0,IssuesEvent,2021-04-06 11:28:15,VivekBuzruk/UI,https://api.github.com/repos/VivekBuzruk/UI,closed,CVE-2021-23337 (High) detected in lodash-4.17.20.tgz - autoclosed,security vulnerability,"## CVE-2021-23337 - High Severity Vulnerability
Vulnerable Library - lodash-4.17.20.tgz
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2021-23337 (High) detected in lodash-4.17.20.tgz - autoclosed - ## CVE-2021-23337 - High Severity Vulnerability
Vulnerable Library - lodash-4.17.20.tgz
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve high detected in lodash tgz autoclosed cve high severity vulnerability vulnerable library lodash tgz lodash modular utilities library home page a href path to dependency file ui package json path to vulnerable library ui node modules lodash package json dependency hierarchy x lodash tgz vulnerable library found in head commit a href found in base branch master vulnerability details lodash versions prior to are vulnerable to command injection via the template function publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required high user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution lodash step up your open source security game with whitesource ,0
108894,9335204581.0,IssuesEvent,2019-03-28 18:00:00,istio/istio,https://api.github.com/repos/istio/istio,closed,Helm repo for 1.1.1 uses wrong hub,area/test and release,"`hub: gcr.io/istio-release` should be `hub: docker.io/istio`.
1.1.0 is correct, 1.1.1 is wrong.
https://storage.googleapis.com/istio-release/releases/1.1.1/charts/istio-1.1.1.tgz
@mandarjog @utka ",1.0,"Helm repo for 1.1.1 uses wrong hub - `hub: gcr.io/istio-release` should be `hub: docker.io/istio`.
1.1.0 is correct, 1.1.1 is wrong.
https://storage.googleapis.com/istio-release/releases/1.1.1/charts/istio-1.1.1.tgz
@mandarjog @utka ",0,helm repo for uses wrong hub hub gcr io istio release should be hub docker io istio is correct is wrong mandarjog utka ,0
51105,12678391853.0,IssuesEvent,2020-06-19 09:41:23,opencv/opencv,https://api.github.com/repos/opencv/opencv,closed,[imgproc] Compilation error caused by color_yuv.simd.hpp,category: build/install platform: win32,"##### System information (version)
- OpenCV => 4.4.0
- Operating System / Platform => Windows 64 Bit
- Compiler => mingw32
##### Detailed description
The symbol ""scr1"" is already defined in the mingw-w64 runtime package in ""mingw-w64\x86_64-8.1.0-posix-seh-rt_v6-rev0\mingw64\x86_64-w64-mingw32\include\dlgs.h"" (included through ""windows.h"" for example).
##### Steps to reproduce
The compilation fails.
##### Several ways to solve this issue
1. The ugliest one would be to add at the beginning of this file:
```
#if defined scr1
# undef scr1
#endif // scr1
```
2. A better one would be to avoid including the ""windows.h"" file before ""color_yuv.simd.hpp""
3. An alternative could be the renaming of variables.
The guilty mingw32 ""dlgs.h"" file:
[dlgs_h.txt](https://github.com/opencv/opencv/files/4776326/dlgs_h.txt)
",1.0,"[imgproc] Compilation error caused by color_yuv.simd.hpp - ##### System information (version)
- OpenCV => 4.4.0
- Operating System / Platform => Windows 64 Bit
- Compiler => mingw32
##### Detailed description
The symbol ""scr1"" is already defined in the mingw-w64 runtime package in ""mingw-w64\x86_64-8.1.0-posix-seh-rt_v6-rev0\mingw64\x86_64-w64-mingw32\include\dlgs.h"" (included through ""windows.h"" for example).
##### Steps to reproduce
The compilation fails.
##### Several ways to solve this issue
1. The ugliest one would be to add at the beginning of this file:
```
#if defined scr1
# undef scr1
#endif // scr1
```
2. A better one would be to avoid including the ""windows.h"" file before ""color_yuv.simd.hpp""
3. An alternative could be the renaming of variables.
The guilty mingw32 ""dlgs.h"" file:
[dlgs_h.txt](https://github.com/opencv/opencv/files/4776326/dlgs_h.txt)
",0, compilation error caused by color yuv simd hpp system information version opencv operating system platform windows bit compiler detailed description the symbol is already defined in the mingw runtime package in mingw posix seh rt include dlgs h included through windows h for example steps to reproduce the compilation fails several ways to solve this issue the ugliest one would be to add at the beginning of this file if defined undef endif a better one should be to avoid including the windows h file before color yuv simd hpp an alternative could be the renaming of variables the guilty dlgs h file ,0
222673,7435048440.0,IssuesEvent,2018-03-26 13:08:06,CS2103JAN2018-T15-B4/main,https://api.github.com/repos/CS2103JAN2018-T15-B4/main,opened,"As an eventful business, I would like a calendar with a list of events to keep track of.",priority.high type.story,This is a build-up on the calendar view feature.,1.0,"As an eventful business, I would like a calendar with a list of events to keep track of. - This is a build-up on the calendar view feature.",0,as an eventful business i would like a calendar with a list of events to keep track of this is a build up on the calendar view feature ,0
6688,23739785764.0,IssuesEvent,2022-08-31 11:23:42,nf-core/tools,https://api.github.com/repos/nf-core/tools,closed,Update PyPI Deployment GHA workflow,automation,"https://github.com/nf-core/tools/actions/runs/2956647970
[build-n-publish: # >> PyPA publish to PyPI GHA: UNSUPPORTED GITHUB ACTION VERSION <<#L1](https://github.com/nf-core/tools/commit/cd0ac0b5ee826304a0fd77792593735dd8fc2e58#annotation_4454551978)
You are using ""pypa/gh-action-pypi-publish@master"". The ""master"" branch of this project has been sunset and will not receive any updates, not even security bug fixes. Please, make sure to use a supported version. If you want to pin to v1 major version, use ""pypa/gh-action-pypi-publish@release/v1"". If you feel adventurous, you may opt to use use ""pypa/gh-action-pypi-publish@unstable/v1"" instead. A more general recommendation is to pin to exact tags or commit shas.",1.0,"Update PyPI Deployment GHA workflow - https://github.com/nf-core/tools/actions/runs/2956647970
[build-n-publish: # >> PyPA publish to PyPI GHA: UNSUPPORTED GITHUB ACTION VERSION <<#L1](https://github.com/nf-core/tools/commit/cd0ac0b5ee826304a0fd77792593735dd8fc2e58#annotation_4454551978)
You are using ""pypa/gh-action-pypi-publish@master"". The ""master"" branch of this project has been sunset and will not receive any updates, not even security bug fixes. Please, make sure to use a supported version. If you want to pin to v1 major version, use ""pypa/gh-action-pypi-publish@release/v1"". If you feel adventurous, you may opt to use use ""pypa/gh-action-pypi-publish@unstable/v1"" instead. A more general recommendation is to pin to exact tags or commit shas.",1,update pypi deployment gha workflow you are using pypa gh action pypi publish master the master branch of this project has been sunset and will not receive any updates not even security bug fixes please make sure to use a supported version if you want to pin to major version use pypa gh action pypi publish release if you feel adventurous you may opt to use use pypa gh action pypi publish unstable instead a more general recommendation is to pin to exact tags or commit shas ,1
814343,30503671063.0,IssuesEvent,2023-07-18 15:24:05,arkedge/c2a-core,https://api.github.com/repos/arkedge/c2a-core,opened,"Rename minimum_user, 2nd_obc_user",priority::high,"## Details
- The current names are hard to understand
- Prefixes that start with a digit, like 2nd, are awkward to use
## Conditions to close
- [ ] Rename the directories and documentation
- [ ] Rename the prefixes and similar identifiers used throughout
",1.0,"minimum_user, 2nd_obc_user の rename - ## 詳細
- わかりにくい
- 2nd のように,先頭数字は使いにくい
## close条件
- [ ] ディレクトリやドキュメントのりネーム
- [ ] 各所接頭語などのりネーム
",0,minimum user obc user の rename 詳細 わかりにくい のように,先頭数字は使いにくい close条件 ディレクトリやドキュメントのりネーム 各所接頭語などのりネーム ,0
751729,26255098851.0,IssuesEvent,2023-01-05 23:30:12,kubernetes/kubernetes,https://api.github.com/repos/kubernetes/kubernetes,closed,"CEL: Invalid value: ""object"": internal error: runtime error: index out of range [3] with length 3 evaluating rule: ",kind/bug priority/important-soon sig/api-machinery triage/accepted,"### What happened?
I'm seeing this error when posting an update to the kubernetes API:
`Invalid value: ""object"": internal error: runtime error: index out of range [3] with length 3 evaluating rule: `
### What did you expect to happen?
No error
### How can we reproduce it (as minimally and precisely as possible)?
https://github.com/inteon/CEL_bug
### Anything else we need to know?
_No response_
### Kubernetes version
```console
$ ./kube-apiserver --version
Kubernetes v1.25.0
```
### Cloud provider
### OS version
```console
# On Linux:
$ cat /etc/os-release
# paste output here
$ uname -a
# paste output here
# On Windows:
C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture
# paste output here
```
### Install tools
### Container runtime (CRI) and version (if applicable)
### Related plugins (CNI, CSI, ...) and versions (if applicable)
",1.0,"CEL: Invalid value: ""object"": internal error: runtime error: index out of range [3] with length 3 evaluating rule: - ### What happened?
I'm seeing this error when posting an update to the kubernetes API:
`Invalid value: ""object"": internal error: runtime error: index out of range [3] with length 3 evaluating rule: `
### What did you expect to happen?
No error
### How can we reproduce it (as minimally and precisely as possible)?
https://github.com/inteon/CEL_bug
### Anything else we need to know?
_No response_
### Kubernetes version
```console
$ ./kube-apiserver --version
Kubernetes v1.25.0
```
### Cloud provider
### OS version
```console
# On Linux:
$ cat /etc/os-release
# paste output here
$ uname -a
# paste output here
# On Windows:
C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture
# paste output here
```
### Install tools
### Container runtime (CRI) and version (if applicable)
### Related plugins (CNI, CSI, ...) and versions (if applicable)
",0,cel invalid value object internal error runtime error index out of range with length evaluating rule what happened i m seeing this error when posting an update to the kubernetes api invalid value object internal error runtime error index out of range with length evaluating rule what did you expect to happen no error how can we reproduce it as minimally and precisely as possible anything else we need to know no response kubernetes version console kube apiserver version kubernetes cloud provider os version console on linux cat etc os release paste output here uname a paste output here on windows c wmic os get caption version buildnumber osarchitecture paste output here install tools container runtime cri and version if applicable related plugins cni csi and versions if applicable ,0
4389,16445491124.0,IssuesEvent,2021-05-20 19:02:51,operate-first/SRE,https://api.github.com/repos/operate-first/SRE,closed,[Automation] Should we settle on a single language for automation scripts? Which one?,automation,"From the discussion @HumairAK started in operate-first/apps#635:
> one issue I'm seeing is right now we have a few scripts in bash, one in python (that @larsks has worked on) we will need to narrow down to one language if we are to wrap these, lest we start calling bash / python scripts from golang or something (yuck).",1.0,"[Automation] Should we settle on a single language for automation scripts? Which one? - From the discussion @HumairAK started in operate-first/apps#635:
> one issue I'm seeing is right now we have a few scripts in bash, one in python (that @larsks has worked on) we will need to narrow down to one language if we are to wrap these, lest we start calling bash / python scripts from golang or something (yuck).",1, should we settle on a single language for automation scripts which one from the discussion humairak started in operate first apps one issue i m seeing is right now we have a few scripts in bash one in python that larsks has worked on we will need to narrow down to one language if we are to wrap these lest we start calling bash python scripts from golang or something yuck ,1
2873,12740933520.0,IssuesEvent,2020-06-26 04:23:29,home-assistant/frontend,https://api.github.com/repos/home-assistant/frontend,closed,"When an entity selection list in automations appears at the bottom of the screen, it doesn't work in Firefox and Chrome",editor: automation stale,"
## The problem
When I create an automation and go to select the entity by starting to type its name, the list shortens momentarily and then jumps back to the full list. It only happens when the list is at the bottom of the screen, making it open above the line; if it's at the top, opening downwards, it seems to work correctly.
## Environment
- Home Assistant release with the issue: 0.105.4
- Last working Home Assistant release (if known): pre-0.100
- Operating environment (Hass.io/Docker/Windows/etc.): hassio
- Browser: Firefox 72.0.2 (and also developer edition), Chrome 80.0.3987.116
- Link to integration documentation on our website:
## Problem-relevant `configuration.yaml`
```yaml
```
## Traceback/Error logs
```txt
```
## Additional information
I did a small 'video' of the problem:
https://www.screencast.com/t/wGrgSIYIt
",1.0,"When an entity selection list in automations appears at the bottom of the screen, it doesn't work in Firefox and Chrome -
## The problem
When I create an automation and go to select the entity by starting to type its name, the list shortens momentarily and then jumps back to the full list. It only happens when the list is at the bottom of the screen, making it open above the line; if it's at the top, opening downwards, it seems to work correctly.
## Environment
- Home Assistant release with the issue: 0.105.4
- Last working Home Assistant release (if known): pre-0.100
- Operating environment (Hass.io/Docker/Windows/etc.): hassio
- Browser: Firefox 72.0.2 (and also developer edition), Chrome 80.0.3987.116
- Link to integration documentation on our website:
## Problem-relevant `configuration.yaml`
```yaml
```
## Traceback/Error logs
```txt
```
## Additional information
I did a small 'video' of the problem:
https://www.screencast.com/t/wGrgSIYIt
",1,when an entity selection list in automations appears at the bottom of the screen it doesn t work in firefox and chrome read this first if you need additional help with this template please refer to make sure you are running the latest version of home assistant before reporting an issue do not report issues for integrations if you are using custom components or integrations provide as many details as possible paste logs configuration samples and code into the backticks do not delete any text from this template otherwise your issue may be closed without comment the problem describe the issue you are experiencing here to communicate to the maintainers tell us what you were trying to do and what happened instead when i create an automation and i go to select the entity by start typing it it will shorten the list momentarily and then jump back to the full list it only happens when the list is at the bottom of the screen making it open above the line if it s at the top opening down it seems to work correctly environment provide details about the versions you are using which helps us to reproduce and find the issue quicker version information is found in the home assistant frontend developer tools info home assistant release with the issue last working home assistant release if known pre operating environment hass io docker windows etc hassio browser firefox and also developer edition chrome link to integration documentation on our website problem relevant configuration yaml an example configuration that caused the problem for you fill this out even if it seems unimportant to you please be sure to remove personal information like passwords private urls and other credentials yaml traceback error logs if you come across any trace or error logs please provide them txt additional information i did a small video of the problem ,1
6900,24022655640.0,IssuesEvent,2022-09-15 08:56:18,querqy/querqy-opensearch,https://api.github.com/repos/querqy/querqy-opensearch,closed,CI workflows,enhancement automation,"Issue:
Create CI workflows for test automation. This should include building the plugin and running unit/integration tests.
More on Plugin standards [here](https://github.com/opensearch-project/opensearch-plugins/blob/793f21c111a322d3800dcc66fa1c61bdc026c271/STANDARDS.md#ci-workflows
)",1.0,"CI workflows - Issue:
Create CI workflows for test automation. This should include building the plugin and running unit/integration tests.
More on Plugin standards [here](https://github.com/opensearch-project/opensearch-plugins/blob/793f21c111a322d3800dcc66fa1c61bdc026c271/STANDARDS.md#ci-workflows
)",1,ci workflows issue create ci workflows for test automation this should include building the plugin and running unit integration tests more on plugin standards ,1
5103,18674127173.0,IssuesEvent,2021-10-31 08:54:04,Tithibots/tithiwa,https://api.github.com/repos/Tithibots/tithiwa,closed,Create track_online_status() In Chatroom class to track that someone is online or not,enhancement help wanted good first issue python Selenium Automation hacktoberfest,"Just open the chatroom of the given contact and save the online status to a file every second.
Maybe use True or False to represent online or offline. ",1.0,"Create track_online_status() In Chatroom class to track that someone is online or not - Just open the chatroom of the given contact and save the online status to a file every second.
Maybe use True or False to represent online or offline. ",1,create track online status in chatroom class to track that someone is online or not just open the chatroom of the given contact and just save online status in some file every second maybe use true or false represents online or offline ,1
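A minimal sketch of what track_online_status() could do once the contact's chatroom is already open; the XPath for the 'online' label, the log format, and the fixed duration are assumptions, and `browser` is assumed to be the Selenium WebDriver instance.

```python
# Hypothetical sketch for the Chatroom class; `browser` is assumed to be the
# Selenium WebDriver already showing the contact's chatroom.
import time

from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.common.by import By


def track_online_status(browser, output_file="online_status.log", duration=60):
    """Append True/False to `output_file` once per second for `duration` seconds,
    depending on whether the 'online' label is shown under the chat title."""
    with open(output_file, "a") as log:
        for _ in range(duration):
            try:
                # WhatsApp Web shows 'online' under the contact name; this XPath
                # is an assumption and may need adjusting.
                browser.find_element(By.XPATH, '//header//span[@title="online"]')
                is_online = True
            except NoSuchElementException:
                is_online = False
            log.write(f"{time.strftime('%H:%M:%S')} {is_online}\n")
            log.flush()
            time.sleep(1)
```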
609109,18854230219.0,IssuesEvent,2021-11-12 02:39:38,crypto-com/chain-desktop-wallet,https://api.github.com/repos/crypto-com/chain-desktop-wallet,opened,Problem: Ledger approval in Staking is longer than expected,low priority need-investigation,"## Problem
To confirm a staking transaction, it takes between 20-30 seconds for the transaction to appear on my Ledger for approval, which is longer than normal.",1.0,"Problem: Ledger approval in Staking is longer than expected - ## Problem
To confirm a staking transaction, it takes between 20-30 seconds for the transaction to appear on my Ledger for approval, which is longer than normal.",0,problem ledger approval in staking is longer than expected problem to confirm a staking transaction it takes between seconds for the transaction to appear on my ledger for approval which is longer than normal ,0
318341,27297708865.0,IssuesEvent,2023-02-23 21:58:16,nucleus-security/Test-repo,https://api.github.com/repos/nucleus-security/Test-repo,opened,Nucleus - [High] - 440037,Test,"Source: QUALYS
Finding Description: CentOS has released security update for kernel security update to fix the vulnerabilities. Affected Products: centos 6
Impact: This vulnerability can also be used to cause a complete denial of service and could render the resource completely unavailable.
Target(s): Asset name: 192.168.56.127
IP: 192.168.56.127
Solution: To resolve this issue, upgrade to the latest packages which contain a patch. Refer to CentOS advisory centos 6 (https://lists.centos.org/pipermail/centos-announce/2018-August/022983.html) for updates and patch information.
Patch:
Following are links for downloading patches to fix the vulnerabilities:
CESA-2018:2390: centos 6 (https://lists.centos.org/pipermail/centos-announce/2018-August/022983.html)
References:
QID:440037
CVE:CVE-2018-5390, CVE-2018-3620, CVE-2018-3646, CVE-2018-3693, CVE-2018-10901, CVE-2017-0861, CVE-2017-15265, CVE-2018-7566, CVE-2018-1000004
Category:CentOS
PCI Flagged:yes
Vendor References:CESA-2018:2390 centos 6
Bugtraq IDs:104976, 104905, 105080, 101288, 103605, 104606, 102329
Severity: High
Date Discovered: 2022-11-12 08:04:44
Nucleus Notification Rules Triggered: Rule GitHub
Project Name: 6716
Please see Nucleus for more information on these vulnerabilities:https://192.168.56.101/nucleus/public/app/index.html#vuln/201000007/NDQwMDM3/UVVBTFlT/VnVsbg--/false/MjAxMDAwMDA3/c3VtbWFyeQ--/false",1.0,"Nucleus - [High] - 440037 - Source: QUALYS
Finding Description: CentOS has released security update for kernel security update to fix the vulnerabilities. Affected Products: centos 6
Impact: This vulnerability can also be used to cause a complete denial of service and could render the resource completely unavailable.
Target(s): Asset name: 192.168.56.127
IP: 192.168.56.127
Solution: To resolve this issue, upgrade to the latest packages which contain a patch. Refer to CentOS advisory centos 6 (https://lists.centos.org/pipermail/centos-announce/2018-August/022983.html) for updates and patch information.
Patch:
Following are links for downloading patches to fix the vulnerabilities:
CESA-2018:2390: centos 6 (https://lists.centos.org/pipermail/centos-announce/2018-August/022983.html)
References:
QID:440037
CVE:CVE-2018-5390, CVE-2018-3620, CVE-2018-3646, CVE-2018-3693, CVE-2018-10901, CVE-2017-0861, CVE-2017-15265, CVE-2018-7566, CVE-2018-1000004
Category:CentOS
PCI Flagged:yes
Vendor References:CESA-2018:2390 centos 6
Bugtraq IDs:104976, 104905, 105080, 101288, 103605, 104606, 102329
Severity: High
Date Discovered: 2022-11-12 08:04:44
Nucleus Notification Rules Triggered: Rule GitHub
Project Name: 6716
Please see Nucleus for more information on these vulnerabilities:https://192.168.56.101/nucleus/public/app/index.html#vuln/201000007/NDQwMDM3/UVVBTFlT/VnVsbg--/false/MjAxMDAwMDA3/c3VtbWFyeQ--/false",0,nucleus source qualys finding description centos has released security update for kernel security update to fix the vulnerabilities affected products centos impact this vulnerability can also be used to cause a complete denial of service and could render the resource completely unavailable target s asset name ip solution to resolve this issue upgrade to the latest packages which contain a patch refer to centos advisory centos for updates and patch information patch following are links for downloading patches to fix the vulnerabilities cesa centos references qid cve cve cve cve cve cve cve cve cve cve category centos pci flagged yes vendor references cesa centos bugtraq ids severity high date discovered nucleus notification rules triggered rule github project name please see nucleus for more information on these vulnerabilities ,0
3932,14993640657.0,IssuesEvent,2021-01-29 11:39:02,wazuh/wazuh-qa,https://api.github.com/repos/wazuh/wazuh-qa,closed,Vuln detector: Add new tests to check the downloaded feed format,automation core/vuln detector,"Hello team,
The purpose of this issue is to add a new test to check that the downloaded feeds have the expected format. For that, the Redhat, Canonical, Ubuntu, and NVD feeds will be downloaded, and it will be checked that they have the following formats:
|Feed | Format |
|--------|-----------|
| Redhat | JSON|
| Canonical | Bzip --> XML|
|Debian | XML|
|NVD| JSON|
Best regards",1.0,"Vuln detector: Add new tests to check the downloaded feed format - Hello team,
The purpose of this issue is to add a new test to check that the downloaded feeds have the expected format. For that, the Redhat, Canonical, Ubuntu, and NVD feeds will be downloaded, and it will be checked that they have the following formats:
|Feed | Format |
|--------|-----------|
| Redhat | JSON|
| Canonical | Bzip --> XML|
|Debian | XML|
|NVD| JSON|
Best regards",1,vuln detector add new tests to check the downloaded feed format hello team the purpose of this issue is to add a new test to check that the downloaded feeds have the expected format for that it will be downloaded the redhat canonical ubuntu and nvd feeds and then will check that they will have the following formats feed format redhat json canonical bzip xml debian xml nvd json best regards,1
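A rough sketch of the feed-format checks proposed above, assuming Python with pytest and requests; the feed URLs are placeholders, not the real feed locations. Each feed is downloaded and parsed with the library matching its expected format (JSON, XML, or bzip2-compressed XML), so a malformed feed fails the test.
```python
# Sketch of the proposed feed-format tests; the URLs below are placeholders (assumptions).
import bz2
import json
import xml.etree.ElementTree as ET

import pytest
import requests

FEEDS = {
    "redhat": ("https://example.com/redhat-feed.json", "json"),
    "canonical": ("https://example.com/canonical-feed.xml.bz2", "bz2-xml"),
    "debian": ("https://example.com/debian-feed.xml", "xml"),
    "nvd": ("https://example.com/nvd-feed.json", "json"),
}

@pytest.mark.parametrize("name,url,fmt", [(n, u, f) for n, (u, f) in FEEDS.items()])
def test_downloaded_feed_has_expected_format(name, url, fmt):
    raw = requests.get(url, timeout=60).content
    if fmt == "bz2-xml":
        raw = bz2.decompress(raw)  # Canonical feed: bzip2 wrapper around XML
        fmt = "xml"
    if fmt == "json":
        json.loads(raw)        # raises if the body is not valid JSON
    elif fmt == "xml":
        ET.fromstring(raw)     # raises if the body is not valid XML
```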
258460,19561337186.0,IssuesEvent,2022-01-03 16:36:26,platten/enarx-test-wasmldr,https://api.github.com/repos/platten/enarx-test-wasmldr,opened,Document/enforce minimum Rust toolchain version,documentation good first issue," **Issue by [connorkuehl](https://github.com/connorkuehl)**
_Tue Sep 15 17:15:56 2020_
----
`enarx-wasmldr` will be most often used in conjunction with `enarx-keepldr`. That is, `enarx-wasmldr` must be compiled as a static PIE binary for `musl`.
I believe the most recent Rust toolchain we depend on is [version 1.46.0 (2020-08-27)](https://github.com/rust-lang/rust/blob/master/RELEASES.md#version-1460-2020-08-27). That release enabled static PIE binaries for `musl`.
",1.0,"Document/enforce minimum Rust toolchain version - **Issue by [connorkuehl](https://github.com/connorkuehl)**
_Tue Sep 15 17:15:56 2020_
----
`enarx-wasmldr` will be most often used in conjunction with `enarx-keepldr`. That is, `enarx-wasmldr` must be compiled as a static PIE binary for `musl`.
I believe the most recent Rust toolchain we depend on is [version 1.46.0 (2020-08-27)](https://github.com/rust-lang/rust/blob/master/RELEASES.md#version-1460-2020-08-27). That release enabled static PIE binaries for `musl`.
",0,document enforce minimum rust toolchain version issue by tue sep enarx wasmldr will be most often used in conjunction with enarx keepldr that is enarx wasmldr must be compiled as a static pie binary for musl i believe the most recent rust toolchain we depend on is that release enabled static pie binaries for musl ,0
7003,24110925778.0,IssuesEvent,2022-09-20 11:16:51,mozilla-mobile/firefox-ios,https://api.github.com/repos/mozilla-mobile/firefox-ios,opened,[XCTests] Tabs Perf tests: Pre-loaded tabs are not shown,eng:automation,"There is the [TabsPerformanceTest](https://github.com/mozilla-mobile/firefox-ios/blob/main/Tests/XCUITests/TabsPerformanceTests.swift) suite where we are launching the app and the tab tray with different number of tabs, from 1 to 1200.
To run these tests we created some `.archive`[ files containing the tabs](https://github.com/mozilla-mobile/firefox-ios/blob/2887a4eede37591f5448c0030462ffbdc3f4c513/Tests/XCUITests/TabsPerformanceTests.swift#L5).
Unfortunately after this commit 6a0b5048121eb1690d82fa2b13235af817857e67 the tabs are not shown in the simulator.
I see changes in the `TabManagerStore` which will be likely the cause of this.
@lmarceau (as author of that commit ;)) I'm afraid we need your help to fix this :( ",1.0,"[XCTests] Tabs Perf tests: Pre-loaded tabs are not shown - There is the [TabsPerformanceTest](https://github.com/mozilla-mobile/firefox-ios/blob/main/Tests/XCUITests/TabsPerformanceTests.swift) suite where we are launching the app and the tab tray with different number of tabs, from 1 to 1200.
To run these tests we created some `.archive`[ files containing the tabs](https://github.com/mozilla-mobile/firefox-ios/blob/2887a4eede37591f5448c0030462ffbdc3f4c513/Tests/XCUITests/TabsPerformanceTests.swift#L5).
Unfortunately after this commit 6a0b5048121eb1690d82fa2b13235af817857e67 the tabs are not shown in the simulator.
I see changes in the `TabManagerStore` which will be likely the cause of this.
@lmarceau (as author of that commit ;)) I'm afraid we need your help to fix this :( ",1, tabs perf tests pre loaded tabs are not shown there is the suite where we are launching the app and the tab tray with different number of tabs from to to run these tests we created some archive unfortunately after this commit the tabs are not shown in the simulator i see changes in the tabmanagerstore which will be likely the cause of this lmarceau as author of that commit i m afraid we need your help to fix this ,1
17880,6522804624.0,IssuesEvent,2017-08-29 05:21:12,drashti4/localisationofschool,https://api.github.com/repos/drashti4/localisationofschool,closed,Website Designing Changes,help wanted website building,"
To-do list for this week
- Where to accommodate new changes (Resource section)
- Design changes in website
- Auditing/Changing current CSS
Suggest your change by creating issue [How to create issue](https://help.github.com/articles/creating-an-issue/) Related Issue [#1](https://github.com/drashti4/localisationofschool/issues/1)
PS - Take a look at current status [website](https://drashti4.github.io/local-web/)",1.0,"Website Designing Changes -
To-do list for this week
- Where to accommodate new changes (Resource section)
- Design changes in website
- Auditing/Changing current CSS
Suggest your change by creating issue [How to create issue](https://help.github.com/articles/creating-an-issue/) Related Issue [#1](https://github.com/drashti4/localisationofschool/issues/1)
PS - Take a look at current status [website](https://drashti4.github.io/local-web/)",0,website designing changes to do list for this week where to accommodate new changes resource section design changes in website auditing changing current css suggest your change by creating issue related issue ps take a look at current status ,0
104018,13020507789.0,IssuesEvent,2020-07-27 03:19:43,lazerwalker/azure-mud,https://api.github.com/repos/lazerwalker/azure-mud,opened,Should we require real names?,design discussion,"In favor of real names:
* Helps tie things back to being a real conference with real people
* Makes it easier for us to tie CoC violations back to real people
* Feels perhaps more ""grown-up"" than just handles/usernames
Against real names:
* Some people, who are not trolls, might prefer to be pseudonymous, and that's totally valid.
* Needs to be worded correctly to make it clear we're not asking for a legal name.
* Yet another thing to ask people. Less info is better!
I'm currently mildly against.",1.0,"Should we require real names? - In favor of real names:
* Helps tie things back to being a real conference with real people
* Makes it easier for us to tie CoC violations back to real people
* Feels perhaps more ""grown-up"" than just handles/usernames
Against real names:
* Some people, who are not trolls, might prefer to be pseudonymous, and that's totally valid.
* Needs to be worded correctly to make it clear we're not asking for a legal name.
* Yet another thing to ask people. Less info is better!
I'm currently mildly against.",0,should we require real names in favor of real names helps tie things back to being a real conference with real people makes it easier for us to tie coc violations back to real people feels perhaps more grown up than just handles usernames against real names some people who are not trolls might prefer to be pseudonymous and that s totally valid needs to be worded correctly to make it clear we re not asking for a legal name yet another thing to ask people less info is better i m currently mildly against ,0
69782,9333306775.0,IssuesEvent,2019-03-28 14:10:07,conosco/conosco-core,https://api.github.com/repos/conosco/conosco-core,opened,Documento de EAP,0 - Development Team 0 - Product Owner 0 - Scrum Master 1 - Product 1 - Techinical Viability 2 - Documentation 5 - Advanced,"# TN° - Documento de EAP
---
### Description:
Preparation and production of the project's Work Breakdown Structure (EAP) document
### Tasks
Section for tasks for more complex issues.
- [ ] Document ideation meeting
- [ ] Choose a tool for producing the EAP
- [ ] Produce the document
### Comments
",1.0,"Documento de EAP - # TN° - Documento de EAP
---
### Description:
Preparation and production of the project's Work Breakdown Structure (EAP) document
### Tasks
Section for tasks for more complex issues.
- [ ] Document ideation meeting
- [ ] Choose a tool for producing the EAP
- [ ] Produce the document
### Comments
",0,documento de eap tn° documento de eap descrição elaboração e produção do documento de estrutura analítica do projeto tarefas seção para tarefas tasks para issues mais complexas reunião de idealização do documento escolher ferramenta para elaboração da eap produção do documento comentários ,0
51353,12705539333.0,IssuesEvent,2020-06-23 04:57:57,xamarin/xamarin-android,https://api.github.com/repos/xamarin/xamarin-android,opened,_error XA0030: Building with JDK version `11.0.7` is not supported_ when attempting to use a JDK 11 version that is a bit too new,Area: App+Library Build,"### Steps to reproduce
1. Download the current version of the _jbrsdk_ JetBrains Runtime that's available on .
Today, that's , OpenJDK version 11.0.7.
2. Extract the files to a directory.
3. Set Xamarin.Android builds to use that JDK 11 directory. For example, open **Tools > Options** in Visual Studio, select the **Xamarin > Android Settings** node, and set **Java Development Kit Location** to that directory.
4. Attempt to build a Xamarin.Android app project.
### Expected behavior
Maybe the build should succeed? That is, maybe the explicit version check for the JDK version number can now use just the major.minor part of the version number instead of the full major.minor.build? It seems like the third place of JDK 11 version numbers (the `Build` number in `System.Version` terminology) changes more frequently than it did for the old 1.8.0 version number scheme of JDK 8.
### Actual behavior
The build fails because the third place of the JDK version number is higher than the current `$(LatestSupportedJavaVersion)`:
```
C:\Program Files (x86)\Microsoft Visual Studio\2019\Preview\MSBuild\Xamarin\Android\Xamarin.Android.Legacy.targets(163,5): error XA0030: Building with JDK version `11.0.7` is not supported. Please install JDK version `11.0.4`. See https://aka.ms/xamarin/jdk9-errors
```
### Version information
Microsoft Visual Studio Enterprise 2019 Int Preview
Version 16.7.0 Preview 4.0 [30222.8.master]
Xamarin.Android SDK 10.4.0.0 (d16-7/de70286)",1.0,"_error XA0030: Building with JDK version `11.0.7` is not supported_ when attempting to use a JDK 11 version that is a bit too new - ### Steps to reproduce
1. Download the current version of the _jbrsdk_ JetBrains Runtime that's available on .
Today, that's , OpenJDK version 11.0.7.
2. Extract the files to a directory.
3. Set Xamarin.Android builds to use that JDK 11 directory. For example, open **Tools > Options** in Visual Studio, select the **Xamarin > Android Settings** node, and set **Java Development Kit Location** to that directory.
4. Attempt to build a Xamarin.Android app project.
### Expected behavior
Maybe the build should succeed? That is, maybe the explicit version check for the JDK version number can now use just the major.minor part of the version number instead of the full major.minor.build? It seems like the third place of JDK 11 version numbers (the `Build` number in `System.Version` terminology) changes more frequently than it did for the old 1.8.0 version number scheme of JDK 8.
### Actual behavior
The build fails because the third place of the JDK version number is higher than the current `$(LatestSupportedJavaVersion)`:
```
C:\Program Files (x86)\Microsoft Visual Studio\2019\Preview\MSBuild\Xamarin\Android\Xamarin.Android.Legacy.targets(163,5): error XA0030: Building with JDK version `11.0.7` is not supported. Please install JDK version `11.0.4`. See https://aka.ms/xamarin/jdk9-errors
```
### Version information
Microsoft Visual Studio Enterprise 2019 Int Preview
Version 16.7.0 Preview 4.0 [30222.8.master]
Xamarin.Android SDK 10.4.0.0 (d16-7/de70286)",0, error building with jdk version is not supported when attempting to use a jdk version that is a bit too new steps to reproduce download the current version of the jbrsdk jetbrains runtime that s available on today that s openjdk version extract the files to a directory set xamarin android builds to use that jdk directory for example open tools options in visual studio select the xamarin android settings node and set java development kit location to that directory attempt to build a xamarin android app project expected behavior maybe the build should succeed that is maybe the explicit version check for the jdk version number can now use just the major minor part of the version number instead of the full major minor build it seems like the third place of jdk version numbers the build number in system version terminology changes more frequently than it did for the old version number scheme of jdk actual behavior the build fails because the third place of the jdk version number is higher than the current latestsupportedjavaversion c program files microsoft visual studio preview msbuild xamarin android xamarin android legacy targets error building with jdk version is not supported please install jdk version see version information microsoft visual studio enterprise int preview version preview xamarin android sdk ,0
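A small illustration of the relaxation suggested in the issue above: compare only the major.minor part of the JDK version instead of the full major.minor.build. This is a sketch in Python rather than the actual MSBuild check, and the supported-version value is only an example.
```python
# Illustration only: accept any build of a supported JDK major.minor line.
def is_supported_jdk(detected: str, latest_supported: str = "11.0.4") -> bool:
    detected_mm = tuple(int(p) for p in detected.split(".")[:2])
    supported_mm = tuple(int(p) for p in latest_supported.split(".")[:2])
    return detected_mm <= supported_mm

assert is_supported_jdk("11.0.7")      # same 11.0 line as 11.0.4, so accepted
assert not is_supported_jdk("12.0.1")  # newer major.minor, still rejected
```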
2103,11394427392.0,IssuesEvent,2020-01-30 09:20:16,elastic/package-registry,https://api.github.com/repos/elastic/package-registry,opened,Add Docker image build stage,automation ci,"We have a new namespace in place to publish the Docker images, so we have to add a new stage to the pipeline job to build and publish the Docker images. We will tag the image with the latest version plus the word SNAPSHOT, something like `docker.elastic.co/package-registry/package-registry:0.2.0-SNAPSHOT`. We also store the same image with a tag referencing the commit, something like `docker.elastic.co/package-registry/package-registry:f999b7a84d977cd19a379f0cec802aa1ef7ca379`. Finally, for GitHub tags we will use the GitHub tag and the commit to publish the Docker image, something like `docker.elastic.co/package-registry/package-registry:0.2.0` and `docker.elastic.co/package-registry/package-registry: 928f750f7dace1934dc5a67bfe24eb848ca44be1 `
```
docker build .
docker tag 87e1feeff7c8 push.docker.elastic.co/package-registry/package-registry:0.2.0
docker push push.docker.elastic.co/package-registry/package-registry:0.2.0
```",1.0,"Add Docker image build stage - We have a new namespace in place to publish the Docker images, so we have to add a new stage to the pipeline job to build and publish the Docker images. We will tag the image with the latest version plus the word SNAPSHOT, something like `docker.elastic.co/package-registry/package-registry:0.2.0-SNAPSHOT`. We also store the same image with a tag referencing the commit, something like `docker.elastic.co/package-registry/package-registry:f999b7a84d977cd19a379f0cec802aa1ef7ca379`. Finally, for GitHub tags we will use the GitHub tag and the commit to publish the Docker image, something like `docker.elastic.co/package-registry/package-registry:0.2.0` and `docker.elastic.co/package-registry/package-registry: 928f750f7dace1934dc5a67bfe24eb848ca44be1 `
```
docker build .
docker tag 87e1feeff7c8 push.docker.elastic.co/package-registry/package-registry:0.2.0
docker push push.docker.elastic.co/package-registry/package-registry:0.2.0
```",1,add docker image build stage we have a new namespace in place to publish the docker images thus we have to add a new stage to the pipeline job to build and publish the docker images we will tag the image with the latest version plus the word snapshot something like docker elastic co package registry package registry snapshot also we store the same image with a tag referenced the commit something like docker elastic co package registry package registry finally for github tags we will use the github tag and the commit to publish the docker image something like docker elastic co package registry package registry and docker elastic co package registry package registry docker build docker tag push docker elastic co package registry package registry docker push push docker elastic co package registry package registry ,1
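A short sketch of the tagging scheme described in the issue above, expressed in Python for clarity; the repository name is taken from the commands in the record, while the snapshot/release split is inferred from the prose and should be treated as an approximation.
```python
# Derive the image tags described above for a snapshot or release build.
REPO = "docker.elastic.co/package-registry/package-registry"

def image_tags(version: str, commit: str, is_release: bool) -> list:
    if is_release:
        return [f"{REPO}:{version}", f"{REPO}:{commit}"]
    return [f"{REPO}:{version}-SNAPSHOT", f"{REPO}:{commit}"]

# Example: a snapshot build of 0.2.0
print(image_tags("0.2.0", "f999b7a84d977cd19a379f0cec802aa1ef7ca379", is_release=False))
```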
4000,15113860147.0,IssuesEvent,2021-02-09 00:33:56,BCDevOps/OpenShift4-RollOut,https://api.github.com/repos/BCDevOps/OpenShift4-RollOut,closed,Create pruning tools,team/DXC tech/automation,"**Describe the issue**
An OpenShift cluster needs regular pruning to keep healthy. This includes the image registry soft and hard prune, as well as old builds and deployments.
https://docs.openshift.com/container-platform/4.4/applications/pruning-objects.html
Develop the needed CronJobs or other tooling to effect the pruning of the cluster
**Definition of done Checklist (where applicable)**
- [x] Image Registry Soft Prune
- [ ] Image Registry Hard Prune
- [x] Deployments Prune
- [x] Builds Prune",1.0,"Create pruning tools - **Describe the issue**
An OpenShift cluster needs regular pruning to keep healthy. This includes the image registry soft and hard prune, as well as old builds and deployments.
https://docs.openshift.com/container-platform/4.4/applications/pruning-objects.html
Develop the needed CronJobs or other tooling to effect the pruning of the cluster
**Definition of done Checklist (where applicable)**
- [x] Image Registry Soft Prune
- [ ] Image Registry Hard Prune
- [x] Deployments Prune
- [x] Builds Prune",1,create pruning tools describe the issue an openshift cluster needs regular pruning to keep healthy this includes the image registry soft and hard prune as well as old builds and deployments develop the needed cronjobs or other tooling to effect the pruning of the cluster definition of done checklist where applicable image registry soft prune image registry hard prune deployments prune builds prune,1
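The pruning described above is normally driven by `oc adm prune`; a hedged sketch of a wrapper that a CronJob image could run is below. Only the basic `--confirm` flag is shown, the schedule and retention policy are assumptions, and the image prune in particular needs registry access and credentials that are omitted here.
```python
# Sketch of a pruning wrapper; in practice this logic would live in a CronJob's container image.
import subprocess

PRUNE_COMMANDS = [
    ["oc", "adm", "prune", "builds", "--confirm"],
    ["oc", "adm", "prune", "deployments", "--confirm"],
    ["oc", "adm", "prune", "images", "--confirm"],  # soft prune; the hard prune is a separate procedure
]

def run_prune() -> None:
    for cmd in PRUNE_COMMANDS:
        print("running:", " ".join(cmd))
        subprocess.run(cmd, check=True)

if __name__ == "__main__":
    run_prune()
```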
234692,7725005220.0,IssuesEvent,2018-05-24 16:35:32,InFact-coop/BlueCross,https://api.github.com/repos/InFact-coop/BlueCross,closed,Postcode validation,T2h T4h bug enhancement priority-2,"**Problem**: Small letters and whitespace padding in the postcode field will prevent you from submitting the form
**Cause**: The regex we use might be a bit overzealous
**Possible solution 1**: Change the regex to allow for both uppercase and lowercase letters. Use elm's controlled input to strip whitespace from the beginning and end of the postcode using [String.trim](http://package.elm-lang.org/packages/elm-lang/core/latest/String#trim).
**Possible solution 2**: Implement on-the-fly postcode checking using [postcode.io's open source API](https://postcodes.io/docs) as vouched for by @lucymk.",1.0,"Postcode validation - **Problem**: Small letters and whitespace padding in the postcode field will prevent you from submitting the form
**Cause**: The regex we use might be a bit overzealous
**Possible solution 1**: Change the regex to allow for both uppercase and lowercase letters. Use elm's controlled input to strip whitespace from the beginning and end of the postcode using [String.trim](http://package.elm-lang.org/packages/elm-lang/core/latest/String#trim).
**Possible solution 2**: Implement on-the-fly postcode checking using [postcode.io's open source API](https://postcodes.io/docs) as vouched for by @lucymk.",0,postcode validation problem small letters and whitespace padding in the postcode field will prevent you from submitting the form cause the regex we use might be a bit overzealous possible solution change the regex to allow for both uppercase and lowercase letters use elm s controlled input to strip whitespace from the beginning and end of the postcode using possible solution implement on the fly postcode checking using as vouched for by lucymk ,0
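A sketch of both fixes suggested above, written in Python for illustration: trim and case-normalise before a regex check (solution 1), and optionally ask the postcodes.io validation endpoint (solution 2). The regex is a common UK-postcode approximation, not the project's actual pattern, and the API route is taken from the postcodes.io docs but should be verified.
```python
# Sketch of the two proposed fixes; the regex is an approximation, not the app's real pattern.
import re
import requests

UK_POSTCODE = re.compile(r"^[A-Z]{1,2}\d[A-Z\d]?\s*\d[A-Z]{2}$", re.IGNORECASE)

def is_valid_postcode(raw: str) -> bool:
    """Solution 1: strip whitespace and accept either case before matching."""
    return bool(UK_POSTCODE.match(raw.strip()))

def is_valid_postcode_api(raw: str) -> bool:
    """Solution 2: delegate validation to postcodes.io (endpoint per their docs)."""
    postcode = raw.strip().replace(" ", "")
    resp = requests.get(f"https://api.postcodes.io/postcodes/{postcode}/validate", timeout=10)
    return bool(resp.json().get("result", False))

print(is_valid_postcode("  sw1a 1aa  "))  # True once trimmed and matched case-insensitively
```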
9409,28240883789.0,IssuesEvent,2023-04-06 07:03:04,camunda/issues,https://api.github.com/repos/camunda/issues,opened,Support for Spring Boot 3.x,public kind:epic component:c7-automation-platform riskAssessment:pending,"### Value Proposition Statement
Allow user to use maintained environment and benefit from new features
### User Problem
- Spring Boot 2.7 is out of maintenance by 11/2023
### User Stories
- As Software Developer I want to use Spring Boot 3.0 for my Camunda Engine and External Task Clients.
### Implementation Notes
- https://github.com/camunda/camunda-bpm-platform/issues/2755
- Team issue: https://github.com/orgs/camunda/projects/44/views/1?pane=issue&itemId=14493705
:robot: This issue is automatically synced from: [source](https://github.com/camunda/product-hub/issues/1100)",1.0,"Support for Spring Boot 3.x - ### Value Proposition Statement
Allow user to use maintained environment and benefit from new features
### User Problem
- Spring Boot 2.7 is out of maintenance by 11/2023
### User Stories
- As Software Developer I want to use Spring Boot 3.0 for my Camunda Engine and External Task Clients.
### Implementation Notes
- https://github.com/camunda/camunda-bpm-platform/issues/2755
- Team issue: https://github.com/orgs/camunda/projects/44/views/1?pane=issue&itemId=14493705
:robot: This issue is automatically synced from: [source](https://github.com/camunda/product-hub/issues/1100)",1,support for spring boot x value proposition statement allow user to use maintained environment and benefit from new features user problem spring boot is out of maintenance by user stories as software developer i want to use spring boot for my camunda engine and external task clients implementation notes team issue robot this issue is automatically synced from ,1
219,4768840865.0,IssuesEvent,2016-10-26 10:27:18,DevExpress/testcafe,https://api.github.com/repos/DevExpress/testcafe,closed,Don't scroll to parent element while focusing on non-focusable element during click automation,AREA: client SYSTEM: automations TYPE: bug,"### Are you requesting a feature or reporting a bug?
bug
### What is the current behavior?
If click automation is performed on an element that isn't focusable but whose parent is, it scrolls to the parent element
### What is the expected behavior?
It must not scroll to the focusable parent element, because this doesn't happen when the click is performed natively (without TestCafe).
### How would you reproduce the current behavior (if this is a bug)?
Run the test
#### Provide the test code and the tested page URL (if applicable)
Tested page URL:
HTML markup:
```html
Title
```
Test code
```js
import { expect } from 'chai';
import { ClientFunction } from 'testcafe';
fixture `bug`
.page `index.html`;
const getWindowTopScroll = ClientFunction(() => window.pageYOffset);
test(""Shouldn't scroll to the parent"", async t => {
const oldWindowScrollValue = await getWindowTopScroll();
await t.click('#child');
const newWindowScrollValue = await getWindowTopScroll();
expect(newWindowScrollValue).eql(oldWindowScrollValue);
});
```
### Specify your
* operating system: WIN 10 x64
* testcafe version: 0.10.0-alpha
* node.js version: v5.7.0",1.0,"Don't scroll to parent element while focusing on non-focusable element during click automation - ### Are you requesting a feature or reporting a bug?
bug
### What is the current behavior?
If click automation is performing on element, which isn't focusable but it parent does, it scrolls to the parent element
### What is the expected behavior?
It must not scroll to focusable parent element, cause it isn't happening when we performs click natively (without TestCafe).
### How would you reproduce the current behavior (if this is a bug)?
Run the test
#### Provide the test code and the tested page URL (if applicable)
Tested page URL:
HTML markup:
```html
Title
```
Test code
```js
import { expect } from 'chai';
import { ClientFunction } from 'testcafe';
fixture `bug`
.page `index.html`;
const getWindowTopScroll = ClientFunction(() => window.pageYOffset);
test(""Shouldn't scroll to the parent"", async t => {
const oldWindowScrollValue = await getWindowTopScroll();
await t.click('#child');
const newWindowScrollValue = await getWindowTopScroll();
expect(newWindowScrollValue).eql(oldWindowScrollValue);
});
```
### Specify your
* operating system: WIN 10 x64
* testcafe version: 0.10.0-alpha
* node.js version: v5.7.0",1,don t scroll to parent element while focusing on non focusable element during click automation are you requesting a feature or reporting a bug bug what is the current behavior if click automation is performing on element which isn t focusable but it parent does it scrolls to the parent element what is the expected behavior it must not scroll to focusable parent element cause it isn t happening when we performs click natively without testcafe how would you reproduce the current behavior if this is a bug run the test provide the test code and the tested page url if applicable tested page url html markup html title test code js import expect from chai import clientfunction from testcafe fixture bug page index html const getwindowtopscroll clientfunction window pageyoffset test shouldn t scroll to the parent async t const oldwindowscrollvalue await getwindowtopscroll await t click child const newwindowscrollvalue await getwindowtopscroll expect newwindowscrollvalue eql oldwindowscrollvalue specify your operating system win testcafe version alpha node js version ,1
271244,23592906182.0,IssuesEvent,2022-08-23 16:38:07,cockroachdb/cockroach,https://api.github.com/repos/cockroachdb/cockroach,closed,sql/tests: TestRandomSyntaxSQLSmith failed,C-test-failure O-robot branch-master T-sql-experience,"sql/tests.TestRandomSyntaxSQLSmith [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RandomSyntaxTestsBazel/5633782?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RandomSyntaxTestsBazel/5633782?buildTab=artifacts#/) on master @ [3c9b17113488d2ee6929936aa6ec48396f3ed71c](https://github.com/cockroachdb/cockroach/commits/3c9b17113488d2ee6929936aa6ec48396f3ed71c):
Random syntax error:
```
rsg_test.go:784: Crash detected: server panic: pq: internal error: lookup for ComparisonExpr ((col_56906)[void] IS DISTINCT FROM (NULL)[unknown])[bool]'s CmpOp failed
```
Query:
```
WITH
with_9404 (col_56900)
AS (
SELECT
*
FROM
(
VALUES
(NULL),
((-0.9320566654205322):::FLOAT8),
((SELECT (-1.7138859033584595):::FLOAT8 AS col_56899 LIMIT 1:::INT8))
)
AS tab_23172 (col_56900)
),
with_9407 (col_56906)
AS (
SELECT
*
FROM
(
VALUES
('':::VOID),
(
(
WITH
with_9405 (col_56901)
AS (
SELECT
*
FROM
(
VALUES
('85007e11-427f-4a36-8d6e-eb4af6af4db5':::UUID),
('33c4f6f8-2600-4c85-a675-341407509889':::UUID),
('076d8edb-4ee3-4db4-82ca-225c6c455386':::UUID),
('fec11024-a8cf-4100-bdd1-deda3ec3de80':::UUID)
)
AS tab_23173 (col_56901)
),
with_9406 (col_56902, col_56903, col_56904)
AS (
SELECT
tab_23175.col1_0 AS col_56902,
'morning':::greeting AS col_56903,
B'1000110000101110011001011100001000010100' AS col_56904
FROM
defaultdb.public.table1@table1_col1_14_idx AS tab_23174
JOIN defaultdb.public.table1@[0] AS tab_23175 ON
(tab_23174.col1_14) = (tab_23175.col1_14)
AND (tab_23174.col1_16) = (tab_23175.col1_16)
AND (tab_23174.col1_9) = (tab_23175.col1_9)
WHERE
false
GROUP BY
tab_23175.col1_0
HAVING
every(tab_23174.col1_1::BOOL)::BOOL
)
SELECT
'':::VOID AS col_56905
FROM
defaultdb.public.table1@[0] AS tab_23176
LIMIT
1:::INT8
)
),
(NULL),
('':::VOID),
('':::VOID)
)
AS tab_23177 (col_56906)
EXCEPT ALL SELECT * FROM (VALUES ('':::VOID)) AS tab_23178 (col_56907)
)
SELECT
COALESCE(cte_ref_2751.col_56906, '':::VOID) AS col_56908
FROM
with_9407 AS cte_ref_2751
ORDER BY
cte_ref_2751.col_56906 DESC, cte_ref_2751.col_56906;
```
Help
See also: [How To Investigate a Go Test Failure \(internal\)](https://cockroachlabs.atlassian.net/l/c/HgfXfJgM)
/cc @cockroachdb/sql-experience
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*TestRandomSyntaxSQLSmith.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
Jira issue: CRDB-17238",1.0,"sql/tests: TestRandomSyntaxSQLSmith failed - sql/tests.TestRandomSyntaxSQLSmith [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RandomSyntaxTestsBazel/5633782?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RandomSyntaxTestsBazel/5633782?buildTab=artifacts#/) on master @ [3c9b17113488d2ee6929936aa6ec48396f3ed71c](https://github.com/cockroachdb/cockroach/commits/3c9b17113488d2ee6929936aa6ec48396f3ed71c):
Random syntax error:
```
rsg_test.go:784: Crash detected: server panic: pq: internal error: lookup for ComparisonExpr ((col_56906)[void] IS DISTINCT FROM (NULL)[unknown])[bool]'s CmpOp failed
```
Query:
```
WITH
with_9404 (col_56900)
AS (
SELECT
*
FROM
(
VALUES
(NULL),
((-0.9320566654205322):::FLOAT8),
((SELECT (-1.7138859033584595):::FLOAT8 AS col_56899 LIMIT 1:::INT8))
)
AS tab_23172 (col_56900)
),
with_9407 (col_56906)
AS (
SELECT
*
FROM
(
VALUES
('':::VOID),
(
(
WITH
with_9405 (col_56901)
AS (
SELECT
*
FROM
(
VALUES
('85007e11-427f-4a36-8d6e-eb4af6af4db5':::UUID),
('33c4f6f8-2600-4c85-a675-341407509889':::UUID),
('076d8edb-4ee3-4db4-82ca-225c6c455386':::UUID),
('fec11024-a8cf-4100-bdd1-deda3ec3de80':::UUID)
)
AS tab_23173 (col_56901)
),
with_9406 (col_56902, col_56903, col_56904)
AS (
SELECT
tab_23175.col1_0 AS col_56902,
'morning':::greeting AS col_56903,
B'1000110000101110011001011100001000010100' AS col_56904
FROM
defaultdb.public.table1@table1_col1_14_idx AS tab_23174
JOIN defaultdb.public.table1@[0] AS tab_23175 ON
(tab_23174.col1_14) = (tab_23175.col1_14)
AND (tab_23174.col1_16) = (tab_23175.col1_16)
AND (tab_23174.col1_9) = (tab_23175.col1_9)
WHERE
false
GROUP BY
tab_23175.col1_0
HAVING
every(tab_23174.col1_1::BOOL)::BOOL
)
SELECT
'':::VOID AS col_56905
FROM
defaultdb.public.table1@[0] AS tab_23176
LIMIT
1:::INT8
)
),
(NULL),
('':::VOID),
('':::VOID)
)
AS tab_23177 (col_56906)
EXCEPT ALL SELECT * FROM (VALUES ('':::VOID)) AS tab_23178 (col_56907)
)
SELECT
COALESCE(cte_ref_2751.col_56906, '':::VOID) AS col_56908
FROM
with_9407 AS cte_ref_2751
ORDER BY
cte_ref_2751.col_56906 DESC, cte_ref_2751.col_56906;
```
Help
See also: [How To Investigate a Go Test Failure \(internal\)](https://cockroachlabs.atlassian.net/l/c/HgfXfJgM)
/cc @cockroachdb/sql-experience
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*TestRandomSyntaxSQLSmith.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
Jira issue: CRDB-17238",0,sql tests testrandomsyntaxsqlsmith failed sql tests testrandomsyntaxsqlsmith with on master random syntax error rsg test go crash detected server panic pq internal error lookup for comparisonexpr col is distinct from null s cmpop failed query with with col as select from values null select as col limit as tab col with col as select from values void with with col as select from values uuid uuid uuid uuid as tab col with col col col as select tab as col morning greeting as col b as col from defaultdb public idx as tab join defaultdb public as tab on tab tab and tab tab and tab tab where false group by tab having every tab bool bool select void as col from defaultdb public as tab limit null void void as tab col except all select from values void as tab col select coalesce cte ref col void as col from with as cte ref order by cte ref col desc cte ref col help see also same failure on other branches sql tests testrandomsyntaxsqlsmith failed sql tests testrandomsyntaxsqlsmith failed cc cockroachdb sql experience jira issue crdb ,0
622,7582804739.0,IssuesEvent,2018-04-25 06:33:49,vmware/harbor,https://api.github.com/repos/vmware/harbor,closed,Can not push signed image for notary signer is restarting,kind/automation-found kind/bug priority/high,"Version v1.5.0-a342c31a
Cannot push a signed image because the notary signer is restarting
{""level"":""fatal"",""msg"":""Could not read config at :/etc/notary/signer-config.json, viper error: open /etc/notary/signer-config.json: permission denied"",""time"":""2018-04-23T06:50:20Z""}
",1.0,"Can not push signed image for notary signer is restarting - Version v1.5.0-a342c31a
Cannot push a signed image because the notary signer is restarting
{""level"":""fatal"",""msg"":""Could not read config at :/etc/notary/signer-config.json, viper error: open /etc/notary/signer-config.json: permission denied"",""time"":""2018-04-23T06:50:20Z""}
",1,can not push signed image for notary signer is restarting version can not push signed image for notary signer is restarting level fatal msg could not read config at etc notary signer config json viper error open etc notary signer config json permission denied time ,1
55993,8038640409.0,IssuesEvent,2018-07-30 15:55:08,choojs/choo,https://api.github.com/repos/choojs/choo,closed,Redirection to another during page load causes the view to not render.,Type: Documentation,"## Expected behavior
I am brand new to Choo so I may be thinking about this all wrong. But the use case is, the User comes to a route that requires authentication. They are not authenticated so they are routed to a login view at a different route.
I have written up a small Choo app to make it easier to demonstrate the problem. The expected result from the code below is:
> Should ONLY EVER get here.
And the route should be:
> https://localhost:8080/login
### package.json
```js
{
""name"": ""test-choo"",
""version"": ""1.0.0"",
""description"": """",
""main"": ""index.js"",
""scripts"": {
""test"": ""echo \""Error: no test specified\"" && exit 1""
},
""author"": """",
""license"": ""ISC"",
""dependencies"": {
""bankai"": ""^9.14.0"",
""choo"": ""^6.13.0""
}
}
```
### index.js
```js
const choo = require('choo')
const html = require('choo/html')
const app = choo()
app.use((state, emitter) => { state.session = null })
app.route('/', (state, emit) => {
if (!state.session) emit(state.events.REPLACESTATE, '/login')
return html`<body>Should NEVER get here.</body>`
})
app.route('/login', (state, emit) => {
return html`<body>Should ONLY EVER get here.</body>`
})
app.mount('body')
```
### Actual behavior
The actual result is:
> Should NEVER get here.
And the route should be:
> https://localhost:8080/login
### Steps to reproduce behavior
Write here.
1. Copy the above files into a folder
2. `npm i`
3. `./node_modules/.bin/bankai start index.js`
4. In a browser open, `https://localhost:8080/`
### Notes
What I noticed when tracing the function calls was that NAVIGATE event emitted and its callback executed.
In the `start` function (which was called after `documentReady` in `mount`), the RENDER event was never emitted because `self._loaded` was false (https://github.com/choojs/choo/blob/master/index.js#L101). After this code is executed, the `documentReady` callback in `start` is called, setting `self._loaded` to true.
So it appears that there is an order of operations issue maybe caused by the `setTimeout` in the `documentReady` call.
I would like to fix this problem (assuming it's a problem), but I need a little bit of hand-holding since I am only 1 day into Choo. Any guidance would be greatly appreciated.
",1.0,"Redirection to another during page load causes the view to not render. - ## Expected behavior
I am brand new to Choo so I may be thinking about this all wrong. But the use case is, the User comes to a route that requires authentication. They are not authenticated so they are routed to a login view at a different route.
I have written up a small Choo app to make it easier to demonstrate the problem. The expected result from the code below is:
> Should ONLY EVER get here.
And the route should be:
> https://localhost:8080/login
### package.json
```js
{
""name"": ""test-choo"",
""version"": ""1.0.0"",
""description"": """",
""main"": ""index.js"",
""scripts"": {
""test"": ""echo \""Error: no test specified\"" && exit 1""
},
""author"": """",
""license"": ""ISC"",
""dependencies"": {
""bankai"": ""^9.14.0"",
""choo"": ""^6.13.0""
}
}
```
### index.js
```js
const choo = require('choo')
const html = require('choo/html')
const app = choo()
app.use((state, emitter) => { state.session = null })
app.route('/', (state, emit) => {
if (!state.session) emit(state.events.REPLACESTATE, '/login')
return html`<body>Should NEVER get here.</body>`
})
app.route('/login', (state, emit) => {
return html`<body>Should ONLY EVER get here.</body>`
})
app.mount('body')
```
### Actual behavior
The actual result is:
> Should NEVER get here.
And the route should be:
> https://localhost:8080/login
### Steps to reproduce behavior
Write here.
1. Copy the above files into a folder
2. `npm i`
3. `./node_modules/.bin/bankai start index.js`
4. In a browser open, `https://localhost:8080/`
### Notes
What I noticed when tracing the function calls was that NAVIGATE event emitted and its callback executed.
In the `start` function (which was called after `documentReady` in `mount`), the RENDER event was never emitted because `self._loaded` was false (https://github.com/choojs/choo/blob/master/index.js#L101). After this code is executed, the `documentReady` callback in `start` is called, setting `self._loaded` to true.
So it appears that there is an order of operations issue maybe caused by the `setTimeout` in the `documentReady` call.
I would like to fix this problem (assuming it's a problem), but I need a little bit of hand-holding since I am only 1 day into Choo. Any guidance would be greatly appreciated.
",0,redirection to another during page load causes the view to not render expected behavior i am brand new to choo so i may be thinking about this all wrong but the use case is the user comes to a route that requires authentication they are not authenticated so they are routed to a login view at a different route i have written up a small choo to make it easier to demostrate the problem the expected result from the code below is should only ever get here and the route should be package json js name test choo version description main index js scripts test echo error no test specified exit author license isc dependencies bankai choo index js js const choo require choo const html require choo html const app choo app use state emitter state session null app route state emit if state session emit state events replacestate login return html should never get here app route login state emit return html should only ever get here app mount body actual behavior the actual result is should never get here and the route should be steps to reproduce behavior write here copy the above files into a folder npm i node modules bin bankai start index js in a browser open notes what i noticed when tracing the function calls was that navigate event emitted and its callback executed in the start function which was called after documentready in mount the render event was never emitted because self loaded was false after this code is executed the documentready callback in start is called setting self loaded to true so it appears that there is an order of operations issue maybe caused by the settimeout in the documentready call i would like to fix this problem assuming its a problem but i need a little bit of hand holding since i am only day into choo any guidance would be greatly appreciated ,0
322491,27611620939.0,IssuesEvent,2023-03-09 16:21:11,delph-in/srg,https://api.github.com/repos/delph-in/srg,reopened,Passive voice,mrs testsuite,"Judging by the item 331 in the MRS test suite, passive voice doesn't quite work yet (the subject is not linked to the event in the MRS):
",1.0,"Passive voice - Judging by the item 331 in the MRS test suite, passive voice doesn't quite work yet (the subject is not linked to the event in the MRS):
",0,passive voice judging by the item in the mrs test suite passive voice doesn t quite work yet the subject is not linked to the event in the mrs img width alt screen shot at pm src img width alt screen shot at pm src ,0
4276,15931247478.0,IssuesEvent,2021-04-14 02:54:09,Azure/azure-powershell,https://api.github.com/repos/Azure/azure-powershell,closed,Runbook fails to call cmdlets within Start-Job scriptblock,Accounts Automation customer-reported question,"## Description
I have a runbook which has a Start-Job scriptblock for authenticating and connecting to ADLS so that I can implement a timeout on the connection attempt. I have to implement this as occasionally a runbook will timeout after 3 hours of just trying to run Connect-AzAccounts.
Prior to the Start-Job scripblock approach this runbook was working fine, with the scriptblock it works locally on my PC, however now running as the runbook results in may errors which appear related to not being able to import modules.
Since Start-Job creates a new session I called Import-Module on the 3 required modules but this still isn't working.
## Steps to reproduce
`$TenantId = ""XXXXX""
$ResourceGroupName = ""XXXXX""
$StorageAccountName = ""XXXXX""
$Credential = Get-AutomationPSCredential -Name ""XXXXXXX""
$connectAzTimeout = 30
$connectAzTimer = [system.diagnostics.stopwatch]::StartNew()
$connectAzJob = Start-Job -ScriptBlock {
Import-Module Microsoft.PowerShell.Core -Force
Import-Module Az.Accounts -Force
Import-Module Az.Storage -Force
$connectAz = Connect-AzAccount -Credential $using:Credential -Tenant $using:TenantId -ServicePrincipal
$storageAccount = Get-AzStorageAccount -ResourceGroupName $using:ResourceGroupName -AccountName $using:StorageAccountName
$storageContext = $storageAccount.Context
$storageContext
}
Register-ObjectEvent $connectAzJob -EventName StateChanged -SourceIdentifier ConnectAzJobEnd -Action {
if ($sender.State -eq 'Completed') {
$global:storageContext = Receive-Job $connectAzJob
}
}
while (!$storageContext -And $connectAzTimer.Elapsed.TotalSeconds -le $connectAzTimeout) {
sleep -Seconds 1
}
`
## Environment data
The following AutomationAccount modules are installed in addition to all of the standard Automation Account modules:
Az.Accounts
Az.Storage
## Error output
`Cannot invoke method. Method invocation is supported only on core types in this language mode. + CategoryInfo : InvalidOperation: (:) [], RuntimeException + FullyQualifiedErrorId : MethodInvocationNotSupportedInConstrainedLanguage + PSComputerName : localhost`
`Could not load file or assembly 'Newtonsoft.Json, Version=10.0.0.0, Culture=neutral, PublicKeyToken=30ad4fe6b2a6aeed' or one of its dependencies. The system cannot find the file specified. + CategoryInfo : NotSpecified: (:) [Import-Module], FileNotFoundException + FullyQualifiedErrorId : System.IO.FileNotFoundException,Microsoft.PowerShell.Commands.ImportModuleCommand + PSComputerName : localhost`
`Could not load file or assembly 'Azure.Core, Version=1.9.0.0, Culture=neutral, PublicKeyToken=92742159e12e44c8' or one of its dependencies. The system cannot find the file specified. + CategoryInfo : NotSpecified: (:) [Import-Module], FileNotFoundException + FullyQualifiedErrorId : System.IO.FileNotFoundException,Microsoft.PowerShell.Commands.ImportModuleCommand + PSComputerName : localhost`
`The term 'Connect-AzAccount' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. + CategoryInfo : ObjectNotFound: (Connect-AzAccount:String) [], CommandNotFoundException + FullyQualifiedErrorId : CommandNotFoundException + PSComputerName : localhost`
",1.0,"Runbook fails to call cmdlets within Start-Job scriptblock - ## Description
I have a runbook which has a Start-Job scriptblock for authenticating and connecting to ADLS so that I can implement a timeout on the connection attempt. I have to implement this as occasionally a runbook will timeout after 3 hours of just trying to run Connect-AzAccounts.
Prior to the Start-Job scripblock approach this runbook was working fine, with the scriptblock it works locally on my PC, however now running as the runbook results in may errors which appear related to not being able to import modules.
Since Start-Job creates a new session I called Import-Module on the 3 required modules but this still isn't working.
## Steps to reproduce
`$TenantId = ""XXXXX""
$ResourceGroupName = ""XXXXX""
$StorageAccountName = ""XXXXX""
$Credential = Get-AutomationPSCredential -Name ""XXXXXXX""
$connectAzTimeout = 30
$connectAzTimer = [system.diagnostics.stopwatch]::StartNew()
$connectAzJob = Start-Job -ScriptBlock {
Import-Module Microsoft.PowerShell.Core -Force
Import-Module Az.Accounts -Force
Import-Module Az.Storage -Force
$connectAz = Connect-AzAccount -Credential $using:Credential -Tenant $using:TenantId -ServicePrincipal
$storageAccount = Get-AzStorageAccount -ResourceGroupName $using:ResourceGroupName -AccountName $using:StorageAccountName
$storageContext = $storageAccount.Context
$storageContext
}
Register-ObjectEvent $connectAzJob -EventName StateChanged -SourceIdentifier ConnectAzJobEnd -Action {
if ($sender.State -eq 'Completed') {
$global:storageContext = Receive-Job $connectAzJob
}
}
while (!$storageContext -And $connectAzTimer.Elapsed.TotalSeconds -le $connectAzTimeout) {
sleep -Seconds 1
}
`
## Environment data
The following AutomationAccount modules are installed in addition to all of the standard Automation Account modules:
Az.Accounts
Az.Storage
## Error output
`Cannot invoke method. Method invocation is supported only on core types in this language mode. + CategoryInfo : InvalidOperation: (:) [], RuntimeException + FullyQualifiedErrorId : MethodInvocationNotSupportedInConstrainedLanguage + PSComputerName : localhost`
`Could not load file or assembly 'Newtonsoft.Json, Version=10.0.0.0, Culture=neutral, PublicKeyToken=30ad4fe6b2a6aeed' or one of its dependencies. The system cannot find the file specified. + CategoryInfo : NotSpecified: (:) [Import-Module], FileNotFoundException + FullyQualifiedErrorId : System.IO.FileNotFoundException,Microsoft.PowerShell.Commands.ImportModuleCommand + PSComputerName : localhost`
`Could not load file or assembly 'Azure.Core, Version=1.9.0.0, Culture=neutral, PublicKeyToken=92742159e12e44c8' or one of its dependencies. The system cannot find the file specified. + CategoryInfo : NotSpecified: (:) [Import-Module], FileNotFoundException + FullyQualifiedErrorId : System.IO.FileNotFoundException,Microsoft.PowerShell.Commands.ImportModuleCommand + PSComputerName : localhost`
`The term 'Connect-AzAccount' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. + CategoryInfo : ObjectNotFound: (Connect-AzAccount:String) [], CommandNotFoundException + FullyQualifiedErrorId : CommandNotFoundException + PSComputerName : localhost`
",1,runbook fails to call cmdlets within start job scriptblock description i have a runbook which has a start job scriptblock for authenticating and connecting to adls so that i can implement a timeout on the connection attempt i have to implement this as occasionally a runbook will timeout after hours of just trying to run connect azaccounts prior to the start job scripblock approach this runbook was working fine with the scriptblock it works locally on my pc however now running as the runbook results in may errors which appear related to not being able to import modules since start job creates a new session i called import module on the required modules but this still isn t working steps to reproduce tenantid xxxxx resourcegroupname xxxxx storageaccountname xxxxx credential get automationpscredential name xxxxxxx connectaztimeout connectaztimer startnew connectazjob start job scriptblock import module microsoft powershell core force import module az accounts force import module az storage force connectaz connect azaccount credential using credential tenant using tenantid serviceprincipal storageaccount get azstorageaccount resourcegroupname using resourcegroupname accountname using storageaccountname storagecontext storageaccount context storagecontext register objectevent connectazjob eventname statechanged sourceidentifier connectazjobend action if sender state eq completed global storagecontext receive job connectazjob while storagecontext and connectaztimer elapsed totalseconds le connectaztimeout sleep seconds environment data the following automationaccount modules are installed in addition to all of the standard automation account modules az accounts az storage error output cannot invoke method method invocation is supported only on core types in this language mode categoryinfo invalidoperation runtimeexception fullyqualifiederrorid methodinvocationnotsupportedinconstrainedlanguage pscomputername localhost could not load file or assembly newtonsoft json version culture neutral publickeytoken or one of its dependencies the system cannot find the file specified categoryinfo notspecified filenotfoundexception fullyqualifiederrorid system io filenotfoundexception microsoft powershell commands importmodulecommand pscomputername localhost could not load file or assembly azure core version culture neutral publickeytoken or one of its dependencies the system cannot find the file specified categoryinfo notspecified filenotfoundexception fullyqualifiederrorid system io filenotfoundexception microsoft powershell commands importmodulecommand pscomputername localhost the term connect azaccount is not recognized as the name of a cmdlet function script file or operable program check the spelling of the name or if a path was included verify that the path is correct and try again categoryinfo objectnotfound connect azaccount string commandnotfoundexception fullyqualifiederrorid commandnotfoundexception pscomputername localhost ,1
143293,19177912119.0,IssuesEvent,2021-12-04 00:04:36,samq-ghdemo/js-monorepo,https://api.github.com/repos/samq-ghdemo/js-monorepo,opened,"CVE-2017-1000228 (High) detected in ejs-0.8.8.tgz, ejs-1.0.0.tgz",security vulnerability,"## CVE-2017-1000228 - High Severity Vulnerability
Vulnerable Libraries - ejs-0.8.8.tgz, ejs-1.0.0.tgz
Path to dependency file: js-monorepo/vulnerable-node/package.json
Path to vulnerable library: js-monorepo/vulnerable-node/node_modules/ejs-locals/node_modules/ejs/package.json,js-monorepo/nodejs-goof/node_modules/ejs-locals/node_modules/ejs/package.json
Path to dependency file: js-monorepo/vulnerable-node/package.json
Path to vulnerable library: js-monorepo/vulnerable-node/node_modules/ejs-locals/node_modules/ejs/package.json,js-monorepo/nodejs-goof/node_modules/ejs-locals/node_modules/ejs/package.json
",0,cve high detected in ejs tgz ejs tgz cve high severity vulnerability vulnerable libraries ejs tgz ejs tgz ejs tgz embedded javascript templates library home page a href path to dependency file js monorepo vulnerable node package json path to vulnerable library js monorepo vulnerable node node modules ejs locals node modules ejs package json js monorepo nodejs goof node modules ejs locals node modules ejs package json dependency hierarchy ejs locals tgz root library x ejs tgz vulnerable library ejs tgz embedded javascript templates library home page a href path to dependency file js monorepo nodejs goof package json path to vulnerable library nodejs goof node modules ejs package json dependency hierarchy x ejs tgz vulnerable library found in head commit a href found in base branch main vulnerability details nodejs ejs versions older than is vulnerable to remote code execution due to weak input validation in ejs renderfile function publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree ejs locals ejs isminimumfixversionavailable true minimumfixversion isbinary false packagetype javascript node js packagename ejs packageversion packagefilepaths istransitivedependency false dependencytree ejs isminimumfixversionavailable true minimumfixversion isbinary false basebranches vulnerabilityidentifier cve vulnerabilitydetails nodejs ejs versions older than is vulnerable to remote code execution due to weak input validation in ejs renderfile function vulnerabilityurl ,0
9025,27394707000.0,IssuesEvent,2023-02-28 18:45:14,awslabs/aws-lambda-powertools-typescript,https://api.github.com/repos/awslabs/aws-lambda-powertools-typescript,closed,Maintenance: fix typo in doc publishing workflow,area/automation type/internal status/completed,"### Summary
In a recent PR (#1124) we have made some changes to the way docs are published and mistakenly introduced a bug in the workflow that wasn't spotted during the review. This caused the workflow [to fail during its first run](https://github.com/awslabs/aws-lambda-powertools-typescript/actions/runs/4296070251) after another unrelated PR was merged.
Based on the error message it appears that one of the GitHub global variable names has a typo: `input` vs `inputs` - with the latter being the correct one.
### Why is this needed?
To allow the workflow to run properly.
### Which area does this relate to?
Automation
### Solution
See linked PR.
### Acknowledgment
- [X] This request meets [Lambda Powertools Tenets](https://awslabs.github.io/aws-lambda-powertools-typescript/latest/#tenets)
- [ ] Should this be considered in other Lambda Powertools languages? i.e. [Python](https://github.com/awslabs/aws-lambda-powertools-python/), [Java](https://github.com/awslabs/aws-lambda-powertools-java/), and [.NET](https://github.com/awslabs/aws-lambda-powertools-dotnet/)
### Future readers
Please react with 👍 and your use case to help us understand customer demand.",1.0,"Maintenance: fix typo in doc publishing workflow - ### Summary
In a recent PR (#1124) we have made some changes to the way docs are published and mistakenly introduced a bug in the workflow that wasn't spotted during the review. This caused the workflow [to fail during its first run](https://github.com/awslabs/aws-lambda-powertools-typescript/actions/runs/4296070251) after another unrelated PR was merged.
Based on the error message it appears that one of the GitHub global variable names has a typo: `input` vs `inputs` - with the latter being the correct one.
### Why is this needed?
To allow the workflow to run properly.
### Which area does this relate to?
Automation
### Solution
See linked PR.
### Acknowledgment
- [X] This request meets [Lambda Powertools Tenets](https://awslabs.github.io/aws-lambda-powertools-typescript/latest/#tenets)
- [ ] Should this be considered in other Lambda Powertools languages? i.e. [Python](https://github.com/awslabs/aws-lambda-powertools-python/), [Java](https://github.com/awslabs/aws-lambda-powertools-java/), and [.NET](https://github.com/awslabs/aws-lambda-powertools-dotnet/)
### Future readers
Please react with 👍 and your use case to help us understand customer demand.",1,maintenance fix typo in doc publishing workflow summary in a recent pr we have made some changes to the way docs are published and mistakenly introduced a bug in the workflow that wasn t spotted during the review this caused the workflow after another unrelated pr was merged based on the error message it appears that one of the github global variables names has a typo input vs inputs with the latter being the correct one why is this needed to allow the workflow to run properly which area does this relate to automation solution see linked pr acknowledgment this request meets should this be considered in other lambda powertools languages i e and future readers please react with 👍 and your use case to help us understand customer demand ,1
63395,26378595483.0,IssuesEvent,2023-01-12 06:15:38,thkl/hap-homematic,https://api.github.com/repos/thkl/hap-homematic,closed,keinen zugriff von extern,enhancement DeviceService,"Hello
I am trying to access the hap externally, but this does not work. As long as I am on Wi-Fi everything works; as soon as I switch to LTE nothing works anymore.",1.0,"keinen zugriff von extern - Hello
I am trying to access the hap externally, but this does not work. As long as I am on Wi-Fi everything works; as soon as I switch to LTE nothing works anymore.",0,keinen zugriff von extern hallo ich versuche von extern auf den hap zuzugreifen doch dies funktioniert nicht solange ich im wlan bin funktioniert alles sobald ich über lte gehe geht nichts mehr,0
166849,26416761762.0,IssuesEvent,2023-01-13 16:32:45,openfoodfacts/openfoodfacts-server,https://api.github.com/repos/openfoodfacts/openfoodfacts-server,closed,"In the home page, the text ""Lastest products added"" is completely shifted to the left of the page ",bug new design,"### Describe the bug
In the home page, the text ""Lastest products added"" is completely shifted to the left of the page (see image).
This behaviour is visible on:
https://nl.openfoodfacts.org/
https://de.openfoodfacts.org/
https://es.openfoodfacts.org/
https://it.openfoodfacts.org/
(and maybe more)
### To Reproduce
Go to : https://nl.openfoodfacts.org/
### Expected behavior
It should be aligned with the other text sections
### Screenshots
_No response_
### Additional context
_No response_
### Type of device
Browser
### Browser version
_No response_
### Number of products impacted
_No response_
### Time per product
_No response_",1.0,"In the home page, the text ""Lastest products added"" is completely shifted to the left of the page - ### Describe the bug
In the home page, the text ""Lastest products added"" is completely shifted to the left of the page (see image).
This behaviour is visible on:
https://nl.openfoodfacts.org/
https://de.openfoodfacts.org/
https://es.openfoodfacts.org/
https://it.openfoodfacts.org/
(and maybe more)
### To Reproduce
Go to : https://nl.openfoodfacts.org/
### Expected behavior
It should be aligned with the other text sections
### Screenshots
_No response_
### Additional context
_No response_
### Type of device
Browser
### Browser version
_No response_
### Number of products impacted
_No response_
### Time per product
_No response_",0,in the home page the text lastest products added is completely shifted to the left of the page describe the bug in the home page the text lastest products added is completely shifted to the left of the page see image this behaviour is visible on and maybe more img width alt pasted graphic src to reproduce go to expected behavior it should be aligned with the other text sections screenshots no response additional context no response type of device browser browser version no response number of products impacted no response time per product no response ,0
300569,25977012528.0,IssuesEvent,2022-12-19 15:36:55,mehah/otclient,https://api.github.com/repos/mehah/otclient,closed,"No items to browse on market,TFS 1.4.2 (1098)",Priority: Medium Status: Pending Test Type: Bug,"### Priority
Medium
### Area
- [X] Data
- [X] Source
- [ ] Docker
- [ ] Other
### What happened?
When I open the market, it shows no items to browse.
Image: https://ibb.co/KbbjHrq
### What OS are you seeing the problem on?
Windows
### Code of Conduct
- [X] I agree to follow this project's Code of Conduct",1.0,"No items to browse on market,TFS 1.4.2 (1098) - ### Priority
Medium
### Area
- [X] Data
- [X] Source
- [ ] Docker
- [ ] Other
### What happened?
When I open the market, it shows no items to browse.
Image: https://ibb.co/KbbjHrq
### What OS are you seeing the problem on?
Windows
### Code of Conduct
- [X] I agree to follow this project's Code of Conduct",0,no items to browse on market tfs priority medium area data source docker other what happened when i open market it show with not items to browse image what os are you seeing the problem on windows code of conduct i agree to follow this project s code of conduct,0
666718,22365012732.0,IssuesEvent,2022-06-16 02:25:57,Railcraft/Railcraft,https://api.github.com/repos/Railcraft/Railcraft,closed,Possible Resource Leak: Iron Tanks,bug high priority,"**Description of the Bug**
Iron and Steel tanks may be leaking memory instead of liquids.
**To Reproduce**
Use an infinite fluid generator from whichever mod.
Pipe fluid into tank. Wait 90-180 minutes. Lag spikes will ensue.
**Negative Test**
Replace the RC tanks with tanks from another mod. No lag.
**Expected behavior**
No lag
**Additional context**
* railcraft-12.0.0.jar
* forge-14.23.5.2835
* infitech 3 modpack (infitechs gotta have RC by tradition. duh.)
* We spent several days trying to narrow down and isolate this to a specific mod before wasting anyone's time; that said, our evidence is only empirical. The common factor we've observed is Railcraft multiblock tanks, but it could also be that the various mods we've used for piping (Thermal Dynamics, Gregtech CE) are having difficulties working with the tanks for reasons specific to those mods.",1.0,"Possible Resource Leak: Iron Tanks - **Description of the Bug**
Iron and Steel tanks may be leaking memory instead of liquids.
**To Reproduce**
Use an infinite fluid generator from whichever mod.
Pipe fluid into tank. Wait 90-180 minutes. Lag spikes will ensue.
**Negative Test**
Replace the RC tanks with tanks from another mod. No lag.
**Expected behavior**
No lag
**Additional context**
* railcraft-12.0.0.jar
* forge-14.23.5.2835
* infitech 3 modpack (infitechs gotta have RC by tradition. duh.)
* We spent several days trying to narrow down and isolate this to a specific mod before wasting anyone's time; that said, our evidence is only empirical. The common factor we've observed is Railcraft multiblock tanks, but it could also be that the various mods we've used for piping (Thermal Dynamics, Gregtech CE) are having difficulties working with the tanks for reasons specific to those mods.",0,possible resource leak iron tanks description of the bug iron and steel tanks may be leaking memory instead of liquids to reproduce use an infinite fluid generator from whichever mod pipe fluid into tank wait minutes lag spikes will ensue negative test replace the rc tanks with tanks from another mod no lag expected behavior no lag additional context railcraft jar forge infitech modpack infitechs gotta have rc by tradition duh we spent several days trying to narrow down and isolate this to a specific mod before wasting anyone s time that said our evidence is only empirical the common factor we ve observed is railcraft multiblock tanks but it could also be that the various mods we ve used for piping thermal dynamics gregtech ce are having difficulties working with the tanks for reasons specific to those mods ,0
2270,11684987908.0,IssuesEvent,2020-03-05 08:09:11,baloise-incubator/gitopscli,https://api.github.com/repos/baloise-incubator/gitopscli,closed,Ensure commits are signed off,automation,"I took the contributing guidelines from our ""official"" template: https://github.com/baloise/repository-template-java/blob/master/CONTRIBUTING.md
Those guidelines require all contributors to sign off their commit: https://baloise-incubator.github.io/gitopscli/contributing/#sign-your-work-developer-certificate-of-origin
We should automatically check that.",1.0,"Ensure commits are signed off - I took the contributing guidelines from our ""official"" template: https://github.com/baloise/repository-template-java/blob/master/CONTRIBUTING.md
Those guidelines require all contributors to sign off their commit: https://baloise-incubator.github.io/gitopscli/contributing/#sign-your-work-developer-certificate-of-origin
We should automatically check that.",1,ensure commits are signed off i took the contributing guidelines from our official template those guidelines require all contributors to sign off their commit we should automatically check that ,1
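For the sign-off check described in the gitopscli issue above, a minimal sketch of such an automated check is shown below, assuming commits are inspected with plain `git` commands; the revision range and script layout are illustrative and not part of the actual gitopscli tooling.
```python
import subprocess

def unsigned_commits(rev_range: str = "origin/master..HEAD") -> list:
    """Return hashes of commits in rev_range whose message lacks a Signed-off-by trailer."""
    hashes = subprocess.run(
        ["git", "rev-list", rev_range],
        capture_output=True, text=True, check=True,
    ).stdout.split()
    missing = []
    for commit in hashes:
        message = subprocess.run(
            ["git", "show", "-s", "--format=%B", commit],
            capture_output=True, text=True, check=True,
        ).stdout
        if "Signed-off-by:" not in message:
            missing.append(commit)
    return missing

if __name__ == "__main__":
    bad = unsigned_commits()
    if bad:
        raise SystemExit("Commits missing Signed-off-by: " + ", ".join(bad))
```
A check along these lines could run in CI so that unsigned commits fail the build before review even starts.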
218700,17016483457.0,IssuesEvent,2021-07-02 12:47:51,ubtue/tuefind,https://api.github.com/repos/ubtue/tuefind,closed,Probleme mit responsive design,System: IxTheo System: KrimDok System: RelBib ready for testing,"At a certain scaling of the portal website (for me, slightly more than half the screen), the arrow of the ""Sortieren"" (sort) dropdown list cannot be clicked, presumably because it protrudes into the ""Erscheinungsjahr"" (publication year) facet:

Please adjust this.",1.0,"Probleme mit responsive design - At a certain scaling of the portal website (for me, slightly more than half the screen), the arrow of the ""Sortieren"" (sort) dropdown list cannot be clicked, presumably because it protrudes into the ""Erscheinungsjahr"" (publication year) facet:

Please adjust this.",0,probleme mit responsive design in einer bestimmten skalierung bei mir etwas mehr als der halbe bildschirm der portalwebsite kann man den pfeil der dropdownliste der sortieren funktion nicht anklicken vmtl weil er in die facette erscheinungsjahr hineinragt mit bitte um anpassung ,0
329847,24237090000.0,IssuesEvent,2022-09-27 01:02:18,DataLinkDC/dlink,https://api.github.com/repos/DataLinkDC/dlink,closed,[Document][doc] Update website ,documentation,"### Search before asking
- [X] I had searched in the [issues](https://github.com/DataLinkDC/dlink/issues?q=is%3Aissue) and found no similar document requirement.
### Description
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
",1.0,"[Document][doc] Update website - ### Search before asking
- [X] I had searched in the [issues](https://github.com/DataLinkDC/dlink/issues?q=is%3Aissue) and found no similar document requirement.
### Description
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
",0, update website search before asking i had searched in the and found no similar document requirement description no response are you willing to submit a pr yes i am willing to submit a pr code of conduct i agree to follow this project s ,0
782908,27511107112.0,IssuesEvent,2023-03-06 08:53:43,pdx-blurp/blurp-frontend,https://api.github.com/repos/pdx-blurp/blurp-frontend,closed,Remove modal pop-up from node creation,high priority enhancement,"Currently the user has to fill out a modal form with node information when creating a node - this is a slow process.
AC:
When a user creates a node using the node tool, the node should just be placed instead of the modal popping up
All node data should be changeable from the data sidebar
This also requires that creating a new node does not require any data from the user",1.0,"Remove modal pop-up from node creation - Currently the user has to fill out a modal form with node information when creating a node - this is a slow process.
AC:
When a user creates a node using the node tool, the node should just be placed instead of the modal popping up
All node data should be changeable from the data sidebar
This also requires that creating a new node does not require any data from the user",0,remove modal pop up from node creation currently the user has to fill out a modal form with node information when creating a node this is a slow process ac when a user creates a node using the node tool the node should just be placed instead of the modal popping up all node data should be changeable from the data sidebar this also requires that creating a new node does not require any data from the user,0
6355,22909753810.0,IssuesEvent,2022-07-16 04:56:46,MicrosoftDocs/azure-docs,https://api.github.com/repos/MicrosoftDocs/azure-docs,closed,Could use clarification on Hybrid Runbook Workers with Private Link,automation/svc triaged cxp doc-enhancement /subsvc Pri2,"Please provide clarification on the following point:
With the current implementation of Private Links for Azure Automation, it only supports running jobs on the Hybrid Runbook Worker connected to an Azure virtual network and does not support cloud jobs.
There are two places where we can run Hybrid Runbook Workers: on an Azure VM, or on an on-premises VM outside of Azure. Can the point be clarified to explicitly state whether on-premises VMs are supported to run Hybrid Runbook Workers against an Automation Account with Private Link enabled?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 29645401-45f3-a1d9-2dd0-e561d49b0eb3
* Version Independent ID: 27677330-fa36-53df-e83d-44f3e5e7a93c
* Content: [Use Azure Private Link to securely connect networks to Azure Automation](https://docs.microsoft.com/en-us/azure/automation/how-to/private-link-security)
* Content Source: [articles/automation/how-to/private-link-security.md](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/automation/how-to/private-link-security.md)
* Service: **automation**
* Sub-service: ****
* GitHub Login: @SGSneha
* Microsoft Alias: **sudhirsneha**",1.0,"Could use clarification on Hybrid Runbook Workers with Private Link - Please provide clarification on the following point:
With the current implementation of Private Links for Azure Automation, it only supports running jobs on the Hybrid Runbook Worker connected to an Azure virtual network and does not support cloud jobs.
There are two places where we can run Hybrid Runbook Workers: on an Azure VM, or on an on-premises VM outside of Azure. Can the point be clarified to explicitly state whether on-premises VMs are supported to run Hybrid Runbook Workers against an Automation Account with Private Link enabled?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 29645401-45f3-a1d9-2dd0-e561d49b0eb3
* Version Independent ID: 27677330-fa36-53df-e83d-44f3e5e7a93c
* Content: [Use Azure Private Link to securely connect networks to Azure Automation](https://docs.microsoft.com/en-us/azure/automation/how-to/private-link-security)
* Content Source: [articles/automation/how-to/private-link-security.md](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/automation/how-to/private-link-security.md)
* Service: **automation**
* Sub-service: ****
* GitHub Login: @SGSneha
* Microsoft Alias: **sudhirsneha**",1,could use clarification on hybrid runbook workers with private link please provide clarification on the following point with the current implementation of private links for azure automation it only supports running jobs on the hybrid runbook worker connected to an azure virtual network and does not support cloud jobs there s places we can run hybrid runbook workers the first is on an azure vm and the second is on an on premises vm outside of azure can the point be clarified to explicitly state whether on premises vms are supported to run hybrid runbook workers against an automation account with private link enabled document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service automation sub service github login sgsneha microsoft alias sudhirsneha ,1
104592,22701980386.0,IssuesEvent,2022-07-05 11:32:43,VirtusLab/git-machete,https://api.github.com/repos/VirtusLab/git-machete,opened,Use keyword-only arguments in more places across the codebase,good first issue code quality,"Like with `git_machete.client.MacheteClient.create_github_pr`.
Mostly to prevent confusing order of params of the same type (such that mypy wouldn't capture if params are swapped).
Generally, `create_github_pr` isn't in fact the best candidate (albeit there are still 2 arguments of `LocalBranchShortName` type).",1.0,"Use keyword-only arguments in more places across the codebase - Like with `git_machete.client.MacheteClient.create_github_pr`.
Mostly to prevent confusing order of params of the same type (such that mypy wouldn't capture if params are swapped).
Generally, `create_github_pr` isn't in fact the best candidate (albeit there are still 2 arguments of `LocalBranchShortName` type).",0,use keyword only arguments in more places across the codebase like with git machete client macheteclient create github pr mostly to prevent confusing order of params of the same type such that mypy wouldn t capture if params are swapped generally create github pr isn t in fact the best candidate albeit there are still arguments of localbranchshortname type ,0
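To make the keyword-only-argument pattern requested in the git-machete issue above concrete, here is a small, self-contained Python sketch; the function and parameter names are made up for the example and are not the project's real API.
```python
def create_pull_request(
    *,  # the bare star makes every following parameter keyword-only
    head_branch: str,
    base_branch: str,
    draft: bool = False,
) -> str:
    return f"PR from {head_branch} into {base_branch} (draft={draft})"

# Call sites must name both branches, so two arguments of the same type
# can no longer be silently swapped without anyone noticing:
print(create_pull_request(head_branch="feature/x", base_branch="develop"))

# create_pull_request("feature/x", "develop")  # TypeError: positional use is rejected
```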
305144,9359840570.0,IssuesEvent,2019-04-02 08:00:44,geosolutions-it/tdipisa,https://api.github.com/repos/geosolutions-it/tdipisa,reopened,SCIADRO - Plan Work,Priority: High task,"- [ ] Planning of the UI work in MS2
- [ ] Planning of the backend part and start checking data ingestion to review model etc",1.0,"SCIADRO - Plan Work - - [ ] Planning of the UI work in MS2
- [ ] Planning of the backend part and start checking data ingestion to review model etc",0,sciadro plan work planning of the ui work in planning of the backend part and start checking data ingestion to review model etc,0
8719,27172165187.0,IssuesEvent,2023-02-17 20:30:55,OneDrive/onedrive-api-docs,https://api.github.com/repos/OneDrive/onedrive-api-docs,closed,Search on group not working without Sites.Read.All permission,type:bug area:Docs automation:Closed,"Searching in a Group Drive with a **Client Credentials** (Application permission) token doesn't work with the `Files.ReadWrite.All` permission. For example:
`https://graph.microsoft.com/v1.0/groups/{GROUP_ID}/drive/root/search(q='newFileTest.docx')`
... results in a 403 Forbidden error:
```json
{
""error"": {
""code"": ""accessDenied"",
""message"": ""The caller does not have permission to perform the action."",
""innerError"": {
""request-id"": ""**redacted**"",
""date"": ""2019-04-17T12:47:10""
}
}
}
```
Only when the **Client Credentials** (Application permission) token has the `Sites.ReadWrite.All` permission (probably `Sites.Read.All` is enough already), search works. Please update the docs to clarify this behavior.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 6a094aa0-adc5-9f08-d75e-bff6c5c42b4b
* Version Independent ID: c674446f-0d55-5c4b-0d0a-bbddf184dd1b
* Content: [Search for files - OneDrive API - OneDrive dev center](https://docs.microsoft.com/en-us/onedrive/developer/rest-api/api/driveitem_search?view=odsp-graph-online#feedback)
* Content Source: [docs/rest-api/api/driveitem_search.md](https://github.com/OneDrive/onedrive-api-docs/blob/live/docs/rest-api/api/driveitem_search.md)
* Product: **onedrive**
* GitHub Login: @rgregg
* Microsoft Alias: **rgregg**",1.0,"Search on group not working without Sites.Read.All permission - Searching in a Group Drive with a **Client Credentials** (Application permission) token doesn't work with the `Files.ReadWrite.All` permission. For example:
`https://graph.microsoft.com/v1.0/groups/{GROUP_ID}/drive/root/search(q='newFileTest.docx')`
... results in a 403 Forbidden error:
```json
{
""error"": {
""code"": ""accessDenied"",
""message"": ""The caller does not have permission to perform the action."",
""innerError"": {
""request-id"": ""**redacted**"",
""date"": ""2019-04-17T12:47:10""
}
}
}
```
Only when the **Client Credentials** (Application permission) token has the `Sites.ReadWrite.All` permission (probably `Sites.Read.All` is enough already), search works. Please update the docs to clarify this behavior.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 6a094aa0-adc5-9f08-d75e-bff6c5c42b4b
* Version Independent ID: c674446f-0d55-5c4b-0d0a-bbddf184dd1b
* Content: [Search for files - OneDrive API - OneDrive dev center](https://docs.microsoft.com/en-us/onedrive/developer/rest-api/api/driveitem_search?view=odsp-graph-online#feedback)
* Content Source: [docs/rest-api/api/driveitem_search.md](https://github.com/OneDrive/onedrive-api-docs/blob/live/docs/rest-api/api/driveitem_search.md)
* Product: **onedrive**
* GitHub Login: @rgregg
* Microsoft Alias: **rgregg**",1,search on group not working without sites read all permission searching in a group drive with a client credentials application permission token doesn t work with the files readwrite all permission for example results in a forbidden error json error code accessdenied message the caller does not have permission to perform the action innererror request id redacted date only when the client credentials application permission token has the sites readwrite all permission probably sites read all is enough already search works please update the docs to clarify this behavior document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product onedrive github login rgregg microsoft alias rgregg ,1
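As a rough illustration of the behaviour reported in the OneDrive issue above, the sketch below acquires an app-only token with the `msal` package and calls the same search endpoint with `requests`; the tenant, client, and group values are placeholders, and whether the call returns 200 or 403 depends on the application permissions granted, exactly as the issue describes.
```python
import msal
import requests

# Placeholder values - supply your own tenant, app registration and group.
TENANT_ID = "<tenant-id>"
CLIENT_ID = "<client-id>"
CLIENT_SECRET = "<client-secret>"
GROUP_ID = "<group-id>"

app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
    client_credential=CLIENT_SECRET,
)
# Client-credentials flow: the application permissions granted in Azure AD decide what works here.
token = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])

resp = requests.get(
    f"https://graph.microsoft.com/v1.0/groups/{GROUP_ID}/drive/root/search(q='newFileTest.docx')",
    headers={"Authorization": f"Bearer {token['access_token']}"},
)
# With only Files.ReadWrite.All this returns 403 accessDenied (as reported above);
# granting Sites.Read.All / Sites.ReadWrite.All lets the search succeed.
print(resp.status_code, resp.json())
```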
254997,27484692871.0,IssuesEvent,2023-03-04 01:08:50,panasalap/linux-4.1.15,https://api.github.com/repos/panasalap/linux-4.1.15,opened,CVE-2018-14615 (Medium) detected in linux-yocto-devv4.2.8,security vulnerability,"## CVE-2018-14615 - Medium Severity Vulnerability
Vulnerable Library - linux-yocto-devv4.2.8
Linux Embedded Kernel - tracks the next mainline release
An issue was discovered in the Linux kernel through 4.17.10. There is a buffer overflow in truncate_inline_inode() in fs/f2fs/inline.c when umounting an f2fs image, because a length value may be negative.
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2018-14615 (Medium) detected in linux-yocto-devv4.2.8 - ## CVE-2018-14615 - Medium Severity Vulnerability
Vulnerable Library - linux-yocto-devv4.2.8
Linux Embedded Kernel - tracks the next mainline release
An issue was discovered in the Linux kernel through 4.17.10. There is a buffer overflow in truncate_inline_inode() in fs/f2fs/inline.c when umounting an f2fs image, because a length value may be negative.
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve medium detected in linux yocto cve medium severity vulnerability vulnerable library linux yocto linux embedded kernel tracks the next mainline release library home page a href found in base branch master vulnerable source files fs inline c fs inline c vulnerability details an issue was discovered in the linux kernel through there is a buffer overflow in truncate inline inode in fs inline c when umounting an image because a length value may be negative publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend ,0
3124,13133250489.0,IssuesEvent,2020-08-06 20:32:06,MLH-Fellowship/nodemaker,https://api.github.com/repos/MLH-Fellowship/nodemaker,closed,Handle errors based on error object,backend mid priority revisit automation,"Revisit the possibility to automate error handling based on the error object:
```ts
} catch (error) {
// TODO: Replace TODO_ERROR_STATUS_CODE and TODO_ERROR_MESSAGE based on the error object returned by API.
if (TODO_ERROR_STATUS_CODE === 401) {
// Return a clear error
throw new Error('The Hacker News credentials are invalid!');
}
if (TODO_ERROR_MESSAGE) {
// Try to return the error prettier
throw new Error(`Hacker News error response [${TODO_ERROR_STATUS_CODE}]: ${TODO_ERROR_MESSAGE}`);
}
// If that data does not exist for some reason, return the actual error.
throw error;
}
```",1.0,"Handle errors based on error object - Revisit the possibility to automate error handling based on the error object:
```ts
} catch (error) {
// TODO: Replace TODO_ERROR_STATUS_CODE and TODO_ERROR_MESSAGE based on the error object returned by API.
if (TODO_ERROR_STATUS_CODE === 401) {
// Return a clear error
throw new Error('The Hacker News credentials are invalid!');
}
if (TODO_ERROR_MESSAGE) {
// Try to return the error prettier
throw new Error(`Hacker News error response [${TODO_ERROR_STATUS_CODE}]: ${TODO_ERROR_MESSAGE}`);
}
// If that data does not exist for some reason, return the actual error.
throw error;
}
```",1,handle errors based on error object revisit the possibility to automate error handling based on the error object ts catch error todo replace todo error status code and todo error message based on the error object returned by api if todo error status code return a clear error throw new error the hacker news credentials are invalid if todo error message try to return the error prettier throw new error hacker news error response todo error message if that data does not exist for some reason return the actual error throw error ,1
84093,16451570695.0,IssuesEvent,2021-05-21 06:43:53,jz-feng/shiba-cafe,https://api.github.com/repos/jz-feng/shiba-cafe,closed,testing/debugging framework to trigger game states,code dev efficiency future,"eg. trigger a certain recipe, change timer durations, etc",1.0,"testing/debugging framework to trigger game states - eg. trigger a certain recipe, change timer durations, etc",0,testing debugging framework to trigger game states eg trigger a certain recipe change timer durations etc,0
1382,10014108863.0,IssuesEvent,2019-07-15 16:40:59,OpenZeppelin/openzeppelin-solidity,https://api.github.com/repos/OpenZeppelin/openzeppelin-solidity,closed,Migrate to Truffle's test node,automation,"Things to look out for:
- Custom account balances
- Gas limit?",1.0,"Migrate to Truffle's test node - Things to look out for:
- Custom account balances
- Gas limit?",1,migrate to truffle s test node things to look out for custom account balances gas limit ,1
144256,19286094642.0,IssuesEvent,2021-12-11 01:36:07,Tim-sandbox/WebGoat-8.1,https://api.github.com/repos/Tim-sandbox/WebGoat-8.1,opened,CVE-2021-22096 (Medium) detected in spring-web-5.3.9.jar,security vulnerability,"## CVE-2021-22096 - Medium Severity Vulnerability
Vulnerable Library - spring-web-5.3.9.jar
Path to dependency file: WebGoat-8.1/webgoat-container/pom.xml
Path to vulnerable library: /home/wss-scanner/.m2/repository/org/springframework/spring-web/5.3.9/spring-web-5.3.9.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-web/5.3.9/spring-web-5.3.9.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-web/5.3.9/spring-web-5.3.9.jar
In Spring Framework versions 5.3.0 - 5.3.10, 5.2.0 - 5.2.17, and older unsupported versions, it is possible for a user to provide malicious input to cause the insertion of additional log entries.
Path to dependency file: WebGoat-8.1/webgoat-container/pom.xml
Path to vulnerable library: /home/wss-scanner/.m2/repository/org/springframework/spring-web/5.3.9/spring-web-5.3.9.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-web/5.3.9/spring-web-5.3.9.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-web/5.3.9/spring-web-5.3.9.jar
In Spring Framework versions 5.3.0 - 5.3.10, 5.2.0 - 5.2.17, and older unsupported versions, it is possible for a user to provide malicious input to cause the insertion of additional log entries.
",0,cve medium detected in spring web jar cve medium severity vulnerability vulnerable library spring web jar spring web library home page a href path to dependency file webgoat webgoat container pom xml path to vulnerable library home wss scanner repository org springframework spring web spring web jar home wss scanner repository org springframework spring web spring web jar home wss scanner repository org springframework spring web spring web jar dependency hierarchy spring boot starter web jar root library x spring web jar vulnerable library found in base branch develop vulnerability details in spring framework versions and older unsupported versions it is possible for a user to provide malicious input to cause the insertion of additional log entries publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org springframework spring release isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree org springframework boot spring boot starter web org springframework spring web isminimumfixversionavailable true minimumfixversion org springframework spring release isbinary false basebranches vulnerabilityidentifier cve vulnerabilitydetails in spring framework versions and older unsupported versions it is possible for a user to provide malicious input to cause the insertion of additional log entries vulnerabilityurl ,0
5096,18670237300.0,IssuesEvent,2021-10-30 15:15:38,aws/aws-cli,https://api.github.com/repos/aws/aws-cli,reopened,Question: aws ec2 wait instance-status-ok and termination,feature-request configuration waiter automation-exempt,"Given I launch a new EC2 instance and want to wait for its status to become _ok_. Now, before that happens, the instance terminates. There is no way for the waiter to ever satisfy in that situation. However, it waits until it times out after 10 minutes.
```
aws ec2 wait instance-status-ok --instance-ids i-0123456789abcdef
# waits 10 min… during that time the EC2 instance gets terminated…
Waiter InstanceStatusOk failed: Max attempts exceeded
```
Wouldn't it be possible for a waiter to fail more quickly when it can detect that a condition can no longer be met? e.g. fail with a specific error code, indicating that the condition can not be met.
",1.0,"Question: aws ec2 wait instance-status-ok and termination - Given I launch a new EC2 instance and want to wait for its status to become _ok_. Now, before that happens, the instance terminates. There is no way for the waiter to ever satisfy in that situation. However, it waits until it times out after 10 minutes.
```
aws ec2 wait instance-status-ok --instance-ids i-0123456789abcdef
# waits 10 min… during that time the EC2 instance gets terminated…
Waiter InstanceStatusOk failed: Max attempts exceeded
```
Wouldn't it be possible for a waiter to fail more quickly when it can detect that a condition can no longer be met? e.g. fail with a specific error code, indicating that the condition can not be met.
",1,question aws wait instance status ok and termination given i launch a new instance and want to wait for its status to become ok now before that happens the instance terminates there is no way for the waiter to ever satisfy in that situation however it waits until it times out after minutes aws wait instance status ok instance ids i waits min… during that time the instance gets terminated… waiter instancestatusok failed max attempts exceeded wouldn t it be possible for a waiter to fail more quickly when it can detect that a condition can no longer be met e g fail with a specific error code indicating that the condition can not be met ,1
4440,16546701370.0,IssuesEvent,2021-05-28 01:30:48,SAP/fundamental-ngx,https://api.github.com/repos/SAP/fundamental-ngx,closed,bug: (platform) menu - focus is missing for avatar menu,E2E automation Medium ariba bug platform,"#### Is this a bug, enhancement, or feature request?
bug
#### Briefly describe your proposal.
One of the avatar menu buttons is skipped on tab navigation and focus state is not shown
#### If this is a bug, please provide steps for reproducing it.
1. go to https://fundamental-ngx.netlify.app/#/platform/menu
2. look at the `Menu with Horizontal Positioning` examples
3. use tab key to navigate through all options

",1.0,"bug: (platform) menu - focus is missing for avatar menu - #### Is this a bug, enhancement, or feature request?
bug
#### Briefly describe your proposal.
One of the avatar menu buttons is skipped on tab navigation and focus state is not shown
#### If this is a bug, please provide steps for reproducing it.
1. go to https://fundamental-ngx.netlify.app/#/platform/menu
2. look at the `Menu with Horizontal Positioning` examples
3. use tab key to navigate through all options

",1,bug platform menu focus is missing for avatar menu is this a bug enhancement or feature request bug briefly describe your proposal one of the avatar menu buttons is skipped on tab navigation and focus state is not shown if this is a bug please provide steps for reproducing it go to look at the menu with horizontal positioning examples use tab key to navigate through all options ,1
236144,25971501893.0,IssuesEvent,2022-12-19 11:37:02,nk7598/linux-4.19.72,https://api.github.com/repos/nk7598/linux-4.19.72,closed,WS-2021-0522 (Medium) detected in linuxlinux-4.19.269 - autoclosed,security vulnerability,"## WS-2021-0522 - Medium Severity Vulnerability
Vulnerable Library - linuxlinux-4.19.269
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"WS-2021-0522 (Medium) detected in linuxlinux-4.19.269 - autoclosed - ## WS-2021-0522 - Medium Severity Vulnerability
Vulnerable Library - linuxlinux-4.19.269
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,ws medium detected in linuxlinux autoclosed ws medium severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in head commit a href vulnerable source files fs aio c vulnerability details in linux kernel is vulnerable to use after free due to missing pollfree handling in fs aio c publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution linux kernel step up your open source security game with mend ,0
6516,23309787217.0,IssuesEvent,2022-08-08 07:06:34,pingcap/tidb,https://api.github.com/repos/pingcap/tidb,closed,[lightning] no retries when pd timeout,type/bug severity/moderate component/br component/lightning found/automation,"## Bug Report
Please answer these questions before submitting your issue. Thanks!
### 1. Minimal reproduce step (Required)
It is observed that there is no retry when pd timeout
### 2. What did you expect to see? (Required)
There should be retry mechanism when pd connection timeout during lightning import
### 3. What did you see instead (Required)
Lightning import fails immediately when PD server timeout
```
[2022/01/05 03:12:47.110 +00:00] [INFO] [restore.go:444] [""the whole procedure start""]
[2022/01/05 03:12:47.111 +00:00] [INFO] [restore.go:748] [""restore all schema start""]
[2022/01/05 03:14:30.165 +00:00] [INFO] [restore.go:767] [""restore all schema completed""] [takeTime=1m43.053950663s] []
[2022/01/05 03:14:42.712 +00:00] [ERROR] [restore.go:462] [""run failed""] [step=2] [error=""Error 9001: PD server timeout""]
[2022/01/05 03:14:42.712 +00:00] [ERROR] [restore.go:473] [""the whole procedure failed""] [takeTime=1m55.602514675s] [error=""Error 9001: PD server timeout""]
[2022/01/05 03:14:42.713 +00:00] [WARN] [local.go:501] [""remove local db file failed""] [error=""unlinkat /tmp/sorted-kv-dir: device or resource busy""]
```
### 4. What is your TiDB version? (Required)
/ # /tidb-lightning -V
Release Version: v5.4.0
Git Commit Hash: 974b5784adbbd47d14659916d47dd986effa7b4e
Git Branch: heads/refs/tags/v5.4.0
Go Version: go1.16.4
UTC Build Time: 2022-01-03 10:01:05
Race Enabled: false
",1.0,"[lightning] no retries when pd timeout - ## Bug Report
Please answer these questions before submitting your issue. Thanks!
### 1. Minimal reproduce step (Required)
It is observed that there is no retry when pd timeout
### 2. What did you expect to see? (Required)
There should be retry mechanism when pd connection timeout during lightning import
### 3. What did you see instead (Required)
Lightning import fails immediately when PD server timeout
```
[2022/01/05 03:12:47.110 +00:00] [INFO] [restore.go:444] [""the whole procedure start""]
[2022/01/05 03:12:47.111 +00:00] [INFO] [restore.go:748] [""restore all schema start""]
[2022/01/05 03:14:30.165 +00:00] [INFO] [restore.go:767] [""restore all schema completed""] [takeTime=1m43.053950663s] []
[2022/01/05 03:14:42.712 +00:00] [ERROR] [restore.go:462] [""run failed""] [step=2] [error=""Error 9001: PD server timeout""]
[2022/01/05 03:14:42.712 +00:00] [ERROR] [restore.go:473] [""the whole procedure failed""] [takeTime=1m55.602514675s] [error=""Error 9001: PD server timeout""]
[2022/01/05 03:14:42.713 +00:00] [WARN] [local.go:501] [""remove local db file failed""] [error=""unlinkat /tmp/sorted-kv-dir: device or resource busy""]
```
### 4. What is your TiDB version? (Required)
/ # /tidb-lightning -V
Release Version: v5.4.0
Git Commit Hash: 974b5784adbbd47d14659916d47dd986effa7b4e
Git Branch: heads/refs/tags/v5.4.0
Go Version: go1.16.4
UTC Build Time: 2022-01-03 10:01:05
Race Enabled: false
",1, no retries when pd timeout bug report please answer these questions before submitting your issue thanks minimal reproduce step required it is observed that there is no retry when pd timeout what did you expect to see required there should be retry mechanism when pd connection timeout during lightning import what did you see instead required lightning import fails immediately when pd server timeout what is your tidb version required tidb lightning v release version git commit hash git branch heads refs tags go version utc build time race enabled false ,1
1865,10987498009.0,IssuesEvent,2019-12-02 09:20:42,sourcegraph/sourcegraph,https://api.github.com/repos/sourcegraph/sourcegraph,reopened,a8n: Implement retryCampaign mutation,automation,"This is a follow-up to [RFC 42](https://docs.google.com/document/d/1j85PoL6NOzLX_PHFzBQogZcnttYK0BXj9XnrxF3DYmA/edit).
Right now, when the `createCampaign` mutation with a given `CampaignPlan` ID fails due to various reasons (GitHub not reachable, token invalid, gitserver down, ...) the conversion of `ChangesetJobs` into `Changesets` in the `(&a8n.Service).runChangesetJob` ends with the `ChangesetJobs` having its `Error` field populated. [See the code here](https://sourcegraph.com/github.com/sourcegraph/sourcegraph@034cee47c83a7734cf04a4b5720717665b5a69db/-/blob/enterprise/pkg/a8n/service.go#L114)
What we want is a `retryCampaign` mutation that
* takes in a `Campaign` ID
* loads all the failed `ChangesetJobs` (definition: `finished_at` is null, or `error` is non-blank, or no `Changeset` with its `changeset_job_id` exists)
* uses `(&a8n.Service).runChangesetJob` to try again to create a commit from the given diff in the connected CampaignJob, push the commit, open a pull request on the codehost, save the pull request as an external service
**Important**: for that to work, the `runChangesetJob` method must be idempotent! That means: if it runs twice with the same `ChangesetJob` it **cannot create duplicate pull requests!**. That means it needs to check that new commits are not added to the same branch, check for `ErrAlreadyExists` response from code hosts, early-exit if a `Changeset` with the given `changeset_job_id` exists, etc.
",1.0,"a8n: Implement retryCampaign mutation - This is a follow-up to [RFC 42](https://docs.google.com/document/d/1j85PoL6NOzLX_PHFzBQogZcnttYK0BXj9XnrxF3DYmA/edit).
Right now, when the `createCampaign` mutation with a given `CampaignPlan` ID fails due to various reasons (GitHub not reachable, token invalid, gitserver down, ...) the conversion of `ChangesetJobs` into `Changesets` in the `(&a8n.Service).runChangesetJob` ends with the `ChangesetJobs` having its `Error` field populated. [See the code here](https://sourcegraph.com/github.com/sourcegraph/sourcegraph@034cee47c83a7734cf04a4b5720717665b5a69db/-/blob/enterprise/pkg/a8n/service.go#L114)
What we want is a `retryCampaign` mutation that
* takes in a `Campaign` ID
* loads all the failed `ChangesetJobs` (definition: `finished_at` is null, or `error` is non-blank, or no `Changeset` with its `changeset_job_id` exists)
* uses `(&a8n.Service).runChangesetJob` to try again to create a commit from the given diff in the connected CampaignJob, push the commit, open a pull request on the codehost, save the pull request as an external service
**Important**: for that to work, the `runChangesetJob` method must be idempotent! That means: if it runs twice with the same `ChangesetJob` it **cannot create duplicate pull requests!**. That means it needs to check that new commits are not added to the same branch, check for `ErrAlreadyExists` response from code hosts, early-exit if a `Changeset` with the given `changeset_job_id` exists, etc.
",1, implement retrycampaign mutation this is a follow up to right now when the createcampaign mutation with a given campaignplan id fails due to various reasons github not reachable token invalid gitserver down the conversion of changesetjobs into changesets in the service runchangesetjob ends with the changesetjobs having its error field populated what we want is a retrycampaign mutation that takes in a campaign id loads all the failed changesetjobs definition finished at is null or error is non blank or no changeset with its changeset job id exists uses service runchangesetjob to try again to create a commit from the given diff in the connected campaignjob push the commit open a pull request on the codehost save the pull request as an external service important for that to work the runchangesetjob method must be idempotent that means if it runs twice with the same changesetjob is cannot create duplicate pull requests that means it needs to check that new commits are not added to same branch check for erralreadyexists response from code hosts early exit if a changset with the given changeset job id exists etc ,1
6,2632908674.0,IssuesEvent,2015-03-08 16:51:29,houeland/kolproxy,https://api.github.com/repos/houeland/kolproxy,closed,QuestLog prereq for finding black market is too greedy,Component: Automation Priority: High Type: Bug,"I was doing a standard run and proxy burned all my adventures in the black forest after the black market was already unlocked without buying the forged identification documents, I believe because of this change: https://github.com/houeland/kolproxy/commit/428ecc23d7b8df140aa1aaacc70da08d667421e0#diff-6ded12ca806b9f315deec9ee1b5296d9L4686
",1.0,"QuestLog prereq for finding black market is too greedy - I was doing a standard run and proxy burned all my adventures in the black forest after the black market was already unlocked without buying the forged identification documents, I believe because of this change: https://github.com/houeland/kolproxy/commit/428ecc23d7b8df140aa1aaacc70da08d667421e0#diff-6ded12ca806b9f315deec9ee1b5296d9L4686
",1,questlog prereq for finding black market is too greedy i was doing a standard run and proxy burned all my adventures in the black forest after the black market was already unlocked without buying the forged identification documents i believe because of this change ,1
248348,7929517475.0,IssuesEvent,2018-07-06 15:19:29,nco/nco,https://api.github.com/repos/nco/nco,closed,Add optional scale_factor/add_offset arguments to ncap2 packing routine,medium priority,"Please make scale_factor and add_offset optional arguments to the ncap2 pack() method, so users can specify these parameters as per discussion in:
https://sourceforge.net/p/nco/discussion/9829/thread/2bd55075/?limit=25#f65b",1.0,"Add optional scale_factor/add_offset arguments to ncap2 packing routine - Please make scale_factor and add_offset optional arguments to the ncap2 pack() method, so users can specify these parameters as per discussion in:
https://sourceforge.net/p/nco/discussion/9829/thread/2bd55075/?limit=25#f65b",0,add optional scale factor add offset arguments to packing routine please make scale factor and add offset optional arguments to the pack method so users can specify these parameters as per discussion in ,0
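To make the requested parameters concrete, here is a small illustration of explicit packing using the netCDF4 Python library rather than NCO itself; the file and variable names are invented, and the chosen scale_factor/add_offset values are arbitrary examples of what a user would pass to an extended ncap2 pack().
```python
import numpy as np
from netCDF4 import Dataset

scale_factor = 0.01   # user-chosen, instead of letting the packer derive it
add_offset = 273.15

src = Dataset("in.nc")                    # illustrative input file
data = src.variables["t2m"][:]            # assumed to be 1-D over "time"

dst = Dataset("out.nc", "w")
dst.createDimension("time", data.shape[0])
packed = dst.createVariable("t2m", "i2", ("time",))
packed.scale_factor = scale_factor
packed.add_offset = add_offset
packed.set_auto_maskandscale(False)       # write raw shorts, not auto-scaled floats
# CF packing convention: packed = round((unpacked - add_offset) / scale_factor)
packed[:] = np.round((data - add_offset) / scale_factor).astype("i2")

dst.close()
src.close()
```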
6760,23865152993.0,IssuesEvent,2022-09-07 10:21:46,mozilla-mobile/firefox-ios,https://api.github.com/repos/mozilla-mobile/firefox-ios,closed,[Bitrise] Smoketest for iPad is wrong running the Full Functional test plan instead,eng:automation,"For the workflow RunSmokeXCUITestsiPad https://github.com/mozilla-mobile/firefox-ios/blob/c6e718220467ff3b55505e993b8f88151c3b4747/bitrise.yml#L725 we are running the Full Functional test suite instead of just the Smoketest plan. See [bitrise logs](https://app.bitrise.io/build/87a9400b-4f76-4ba0-a358-ec3c4e21b684?tab=log).
The reason is that there is a - test_plan: option that we need to use instead of the - xcodebuild_test_options: ""-testPlan SmokeXCUITests"". The latter one is unknown and the test plan appears unset; that's why bitrise is running all the tests in the Fennec_Enterprise_XCUITests instead of just the SmokeTest
cc @clarmso I will take this one and ask you for review so that you start being familiar with the bitrise.yml file.
┆Issue is synchronized with this [Jira Task](https://mozilla-hub.atlassian.net/browse/FXIOS-4865)
",1.0,"[Bitrise] Smoketest for iPad is wrong running the Full Functional test plan instead - For the workflow RunSmokeXCUITestsiPad https://github.com/mozilla-mobile/firefox-ios/blob/c6e718220467ff3b55505e993b8f88151c3b4747/bitrise.yml#L725 we are running the Full Functional test suite instead of just the Smoketest plan. See[ bitrise logs](https://app.bitrise.io/build/87a9400b-4f76-4ba0-a358-ec3c4e21b684?tab=log).
The reason is that there is a - test_plan: option that we need to use instead of the - xcodebuild_test_options: ""-testPlan SmokeXCUITests"". The latter one is unkown and the test plan appears unset, that's why bitrise is running all the tests in the Fennec_Enterprise_XCUITests instead of just the SmokeTest
cc @clarmso I will take this one and ask you for review so that you start being familiar with the bitrise.yml file.
┆Issue is synchronized with this [Jira Task](https://mozilla-hub.atlassian.net/browse/FXIOS-4865)
",1, smoketest for ipad is wrong running the full functional test plan instead for the workflow runsmokexcuitestsipad we are running the full functional test suite instead of just the smoketest plan see the reason is that there is a test plan option that we need to use instead of the xcodebuild test options testplan smokexcuitests the latter one is unkown and the test plan appears unset that s why bitrise is running all the tests in the fennec enterprise xcuitests instead of just the smoketest cc clarmso i will take this one and ask you for review so that you start being familiar with the bitrise yml file ┆issue is synchronized with this ,1
337481,30248327302.0,IssuesEvent,2023-07-06 18:18:47,unifyai/ivy,https://api.github.com/repos/unifyai/ivy,opened,Fix jax_lax_operators.test_jax_conj,JAX Frontend Sub Task Failing Test,"| | |
|---|---|
|paddle|
|torch|
",1.0,"Fix jax_lax_operators.test_jax_conj - | | |
|---|---|
|paddle|
|torch|
",0,fix jax lax operators test jax conj paddle a href src torch a href src ,0
377880,26274064057.0,IssuesEvent,2023-01-06 20:00:50,profusion/.github,https://api.github.com/repos/profusion/.github,opened,feat: create `README.md` for `ProFUSION`'s GitHub page,documentation help wanted,"## Description
This repository is special, as GitHub shares its properties (like PR templates) across different repos in `ProFUSION`'s
ownership. This also happens with the `README.md` file, which will appear in the main page of PF's profile here on GitHub. With that, we need to write a good portfolio-like file that will be visible to anyone on GitHub.
## Implementation details
Since the file will be public, we can't link to any private repos. Instead, we should detail things like:
- Our work ethic
- Mention some projects we've worked on
- Link our developer's profiles
- Mention technologies we excel in
- Use attractive graphs/images/gifs/animations in the file, like the best GitHub profiles do
These aren't hard points; all of this should be discussed with other employees (or employers) to make this a shared effort
## Potential caveats
Any little point that you're not sure can be shared in public needs to be addressed first. Get confirmation with @barbieri or @bdilly BEFORE pushing your changes to the remote (even if it's a wip).
Needless to say, both Barbieri and Dilly should approve the file for it to be merged
## Additional context and visual reference
Here are some cool profiles from some of our developers that can serve as inspiration:
- [Felipe Bergamin](https://github.com/felipebergamin)
- [Daniel Céspedes](https://github.com/devDanielCespedes)
- [Ricardo Dalarme](https://github.com/ricardodalarme)",1.0,"feat: create `README.md` for `ProFUSION`'s GitHub page - ## Description
This repository is special, as GitHub shares its properties (like PR templates) across different repos in `ProFUSION`'s
ownership. This also happens with the `README.md` file, which will appear in the main page of PF's profile here on GitHub. With that, we need to write a good portfolio-like file that will be visible to anyone on GitHub.
## Implementation details
Since the file will be public, we can't link to any private repos. Instead, we should detail things like:
- Our work ethic
- Mention some projects we've worked on
- Link our developer's profiles
- Mention technologies we excel in
- Use attractive graphs/images/gifs/animations in the file, like the best GitHub profiles do
These aren't hard points; all of this should be discussed with other employees (or employers) to make this a shared effort
## Potential caveats
Any little point that you're not sure can be shared in public needs to be addressed first. Get confirmation with @barbieri or @bdilly BEFORE pushing your changes to the remote (even if it's a wip).
Needless to say, both Barbieri and Dilly should approve the file for it to be merged
## Additional context and visual reference
Here are some cool profiles from some of our developers that can serve as inspiration:
- [Felipe Bergamin](https://github.com/felipebergamin)
- [Daniel Céspedes](https://github.com/devDanielCespedes)
- [Ricardo Dalarme](https://github.com/ricardodalarme)",0,feat create readme md for profusion s github page description this repository is special as github shares its properties like pr templates across different repos in profusion s ownership this also happens with the readme md file which will appear in the main page of pf s profile here on github with that we need to write a good portfolio like file that will be visible to anyone on github implementation details since the file will be public we can t link to any private repos instead we should detail things like our work ethic mention some projects we ve worked on link our developer s profiles mention technologies we excel in use attractive graphs images gifs animations in the file like the best github profiles do these aren t hard points all of this should be discussed with other employees or employers to make this a shared effort potential caveats any little point that you re not sure can be shared in public needs to be addressed first get confirmation with barbieri or bdilly before pushing your changes to the remote even if it s a wip needless to say both barbieri and dilly should approve the file for it to be merged additional context and visual reference here are some cool profiles from some of our developers that can serve as inspiration ,0
7437,24880133668.0,IssuesEvent,2022-10-27 23:41:26,AzureAD/microsoft-authentication-library-for-objc,https://api.github.com/repos/AzureAD/microsoft-authentication-library-for-objc,closed,Automation tests failure,automation failure,"@AzureAD/appleidentity
Automation failed for [AzureAD/microsoft-authentication-library-for-objc](https://github.com/AzureAD/microsoft-authentication-library-for-objc) ran against commit : Fixed MSAL build [21f3ada6821bd2839827c49ba1c8c47594e6be81]
Pipeline URL : [https://identitydivision.visualstudio.com/IDDP/_build/results?buildId=1004872&view=logs](https://identitydivision.visualstudio.com/IDDP/_build/results?buildId=1004872&view=logs)",1.0,"Automation tests failure - @AzureAD/appleidentity
Automation failed for [AzureAD/microsoft-authentication-library-for-objc](https://github.com/AzureAD/microsoft-authentication-library-for-objc) ran against commit : Fixed MSAL build [21f3ada6821bd2839827c49ba1c8c47594e6be81]
Pipeline URL : [https://identitydivision.visualstudio.com/IDDP/_build/results?buildId=1004872&view=logs](https://identitydivision.visualstudio.com/IDDP/_build/results?buildId=1004872&view=logs)",1,automation tests failure azuread appleidentity automation failed for ran against commit fixed msal build pipeline url ,1
2872,12740762016.0,IssuesEvent,2020-06-26 03:45:14,pacorain/home,https://api.github.com/repos/pacorain/home,opened,"Notification: It's raining, you should close the windows!",new automation,Home Assistant should send everyone a notification when the weather changes to rainy and there are open windows.,1.0,"Notification: It's raining, you should close the windows! - Home Assistant should send everyone a notification when the weather changes to rainy and there are open windows.",1,notification it s raining you should close the windows home assistant should send everyone a notification when the weather changes to rainy and there are open windows ,1
5523,19910617943.0,IssuesEvent,2022-01-25 16:48:23,bcgov/api-services-portal,https://api.github.com/repos/bcgov/api-services-portal,closed,Test Rate limiting - Global level,automation aps-demo,"Apply Global Rate limiting at Route level(Completed)
1. set api rate limit to global service level
1.1 Post the API request to set rate limiting at route level to Kong
1.2 Verify the request status should be 201 created
2. Verify that Rate limiting is set at global route level
2.1 Make the request call to Kong gateway and verify that service returns 200 status code and verify 'x-ratelimit-remaining-hour' in response header
3. set api rate limit to 1 request per min, local Redis and Scope as Route
3.1 Mark(access-manager) login to the application
3.2 Set the namespace
3.3 Navigate to Consumers Page under Namespaces
3.4 Click on the consumer
3.5 Click on Rate Limiting option
3.6 Enter 1 request in hour input field, Select 'Redis' in Policy drop down and select scope as Route in Rate limiting popup
3.7 Click on Apply button
4. verify rate limit error when the API calls beyond the limit
4.1 Make the request call to Kong gateway and verify that service returns 200 status code
4.2 Make another request call to Kong gateway and verify that service returns 429 status code and 'API rate limit exceeded' message in the response when the user calls the API beyond the limit
Apply Global Rate limiting at Service level(Completed)
1. set api rate limit to global service level
1.1 Post the API request to set rate limiting at Service level to Kong
1.2 Verify the request status should be 201 created
2. Verify that Rate limiting is set at global service level
2.1 Make the request call to Kong gateway and verify that service returns 200 status code and verify 'x-ratelimit-remaining-hour' in response header
3. set api rate limit to 1 request per min, local Redis and Scope as Service
3.1 Mark(access-manager) login to the application
3.2 Set the namespace
3.3 Navigate to Consumers Page under Namespaces
3.4 Click on the consumer
3.5 Click on Rate Limiting option
3.6 Enter 1 request in hour input field, Select 'Redis' in Policy drop down and select scope as Service in Rate limiting popup
3.7 Click on Apply button
4. verify rate limit error when the API calls beyond the limit
4.1 Make the request call to Kong gateway and verify that service returns 200 status code
4.2 Make another request call to Kong gateway and verify that service returns 429 status code and 'API rate limit exceeded' message in the response when the user calls the API beyond the limit",1.0,"Test Rate limiting - Global level - Apply Global Rate limiting at Route level(Completed)
1. set api rate limit to global service level
1.1 Post the API request to set rate limiting at route level to Kong
1.2 Verify the request status should be 201 created
2. Verify that Rate limiting is set at global route level
2.1 Make the request call to Kong gateway and verify that service returns 200 status code and verify 'x-ratelimit-remaining-hour' in response header
3. set api rate limit to 1 request per min, local Redis and Scope as Route
3.1 Mark(access-manager) login to the application
3.2 Set the namespace
3.3 Navigate to Consumers Page under Namespaces
3.4 Click on the consumer
3.5 Click on Rate Limiting option
3.6 Enter 1 request in hour input field, Select 'Redis' in Policy drop down and select scope as Route in Rate limiting popup
3.7 Click on Apply button
4. verify rate limit error when the API calls beyond the limit
4.1 Make the request call to Kong gateway and verify that service returns 200 status code
4.2 Make another request call to Kong gateway and verify that service returns 429 status code and 'API rate limit exceeded' message in the response when the user calls the API beyond the limit
Apply Global Rate limiting at Service level(Completed)
1. set api rate limit to global service level
1.1 Post the API request to set rate limiting at Service level to Kong
1.2 Verify the request status should be 201 created
2. Verify that Rate limiting is set at global service level
2.1 Make the request call to Kong gateway and verify that service returns 200 status code and verify 'x-ratelimit-remaining-hour' in response header
3. set api rate limit to 1 request per min, local Redis and Scope as Service
3.1 Mark(access-manager) login to the application
3.2 Set the namespace
3.3 Navigate to Consumers Page under Namespaces
3.4 Click on the consumer
3.5 Click on Rate Limiting option
3.6 Enter 1 request in hour input field, Select 'Redis' in Policy drop down and select scope as Service in Rate limiting popup
3.7 Click on Apply button
4. verify rate limit error when the API calls beyond the limit
4.1 Make the request call to Kong gateway and verify that service returns 200 status code
4.2 Make another request call to Kong gateway and verify that service returns 429 status code and 'API rate limit exceeded' message in the response when user the calls the API beyond the limit",1,test rate limiting global level apply global rate limiting at route level completed set api rate limit to global service level post the api request to set rate limiting at route level to kong verify the request status should be created verify that rate limiting is set at global route level make the request call to kong gateway and verify that service returns status code and verify x ratelimit remaining hour in response header set api rate limit to request per min local redis and scope as route mark access manager login to the application set the namespace navigate to consumers page under namespaces click on the consumer click on rate limiting option enter request in hour input field select redis in policy drop down and select scope as route in rate limiting popup click on apply button verify rate limit error when the api calls beyond the limit make the request call to kong gateway and verify that service returns status code make another request call to kong gateway and verify that service returns status code and api rate limit exceeded message in the response when user the calls the api beyond the limit apply global rate limiting at service level completed set api rate limit to global service level post the api request to set rate limiting at service level to kong verify the request status should be created verify that rate limiting is set at global service level make the request call to kong gateway and verify that service returns status code and verify x ratelimit remaining hour in response header set api rate limit to request per min local redis and scope as service mark access manager login to the application set the namespace navigate to consumers page under namespaces click on the consumer click on rate limiting option enter request in hour input field select redis in policy drop down and select scope as service in rate limiting popup click on apply button verify rate limit error when the api calls beyond the limit make the request call to kong gateway and verify that service returns status code make another request call to kong gateway and verify that service returns status code and api rate limit exceeded message in the response when user the calls the api beyond the limit,1
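The steps in the rate-limiting test plan above map directly onto Kong's admin and proxy APIs. Below is a minimal sketch of the same flow in Node.js (18+, built-in fetch); the admin URL, route name and gateway URL are placeholders chosen for illustration rather than values from the test plan, and the policy is set to `local` instead of Redis just to keep the example self-contained.

```javascript
// Sketch of the rate-limiting test flow described above (assumed URLs/names).
const KONG_ADMIN = 'http://localhost:8001';          // placeholder admin API
const GATEWAY = 'http://localhost:8000/my-service';  // placeholder proxied route

async function main() {
  // Steps 1.x: apply the rate-limiting plugin at route level and expect 201 Created.
  const created = await fetch(`${KONG_ADMIN}/routes/my-route/plugins`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ name: 'rate-limiting', config: { hour: 1, policy: 'local' } }),
  });
  if (created.status !== 201) throw new Error(`expected 201, got ${created.status}`);

  // Steps 2.x / 4.1: the first call succeeds and exposes the remaining-quota header.
  const first = await fetch(GATEWAY);
  console.log(first.status, first.headers.get('x-ratelimit-remaining-hour'));

  // Step 4.2: a second call within the same hour should come back as 429.
  const second = await fetch(GATEWAY);
  console.log(second.status); // expected 429 with an 'API rate limit exceeded' body
}

main().catch(console.error);
```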
1007,9122064856.0,IssuesEvent,2019-02-23 03:52:56,MicrosoftDocs/azure-docs,https://api.github.com/repos/MicrosoftDocs/azure-docs,closed,Monitoring agent does not install Automation dependencies,automation/svc,"PATH C:\Program Files\Microsoft Monitoring Agent\Agent\AzureAutomation\ does not exist after installing monitoring agent, manual installation cannot be completed
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 7b29372c-7bd9-7da2-4cff-9afbb432bccf
* Version Independent ID: 66ce101d-d21b-3fdf-be70-7f9cadc1570e
* Content: [Azure Automation Windows Hybrid Runbook Worker](https://docs.microsoft.com/en-us/azure/automation/automation-windows-hrw-install#manual-deployment)
* Content Source: [articles/automation/automation-windows-hrw-install.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-windows-hrw-install.md)
* Service: **automation**
* GitHub Login: @georgewallace
* Microsoft Alias: **gwallace**",1.0,"Monitoring agent does not install Automation dependencies - PATH C:\Program Files\Microsoft Monitoring Agent\Agent\AzureAutomation\ does not exist after installing monitoring agent, manual installation cannot be completed
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 7b29372c-7bd9-7da2-4cff-9afbb432bccf
* Version Independent ID: 66ce101d-d21b-3fdf-be70-7f9cadc1570e
* Content: [Azure Automation Windows Hybrid Runbook Worker](https://docs.microsoft.com/en-us/azure/automation/automation-windows-hrw-install#manual-deployment)
* Content Source: [articles/automation/automation-windows-hrw-install.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-windows-hrw-install.md)
* Service: **automation**
* GitHub Login: @georgewallace
* Microsoft Alias: **gwallace**",1,monitoring agent does not install automation dependencies path c program files microsoft monitoring agent agent azureautomation does not exist after installing monitoring agent manual installation cannot be completed document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service automation github login georgewallace microsoft alias gwallace ,1
498547,14409879942.0,IssuesEvent,2020-12-04 03:22:18,pulumi/pulumi,https://api.github.com/repos/pulumi/pulumi,closed,`pulumi plugin install --reinstall` doesn't work for Python projects,kind/bug language/python priority/P1,"#### Problem
`pulumi plugin install --reinstall` doesn't work for Python projects. Running with verbose output seems to indicate zero required plugins were found.
Looking through the code I suspect this `TODO` is the issue.
https://github.com/pulumi/pulumi/blob/89c956d18942c1fcbf687da3052dd26089d8f486/sdk/python/cmd/pulumi-language-python/main.go#L144-L149
#### Verbose Output
```
% pulumi plugin install --reinstall -v9 --logtostderr
I1019 16:15:34.901943 11777 pulumi.go:129] skipping update check
I1019 16:15:34.906450 11777 plugins.go:470] GetPluginPath(language, python, ): found on $PATH /usr/local/bin/pulumi-language-python
I1019 16:15:34.906923 11777 plugin.go:83] Launching plugin 'python' from '/usr/local/bin/pulumi-language-python' with args: 127.0.0.1:61623
I1019 16:15:34.932973 11777 langruntime_plugin.go:178] langhost[python].GetPluginInfo() executing
I1019 16:15:34.934407 11777 langruntime_plugin.go:91] langhost[python].GetRequiredPlugins(proj=aws-py-fargate,pwd=/Users/clstokes/go/src/github.com/pulumi/examples/aws-py-fargate,program=.) executing
I1019 16:15:34.935436 11777 langruntime_plugin.go:133] langhost[python].GetRequiredPlugins(proj=aws-py-fargate,pwd=/Users/clstokes/go/src/github.com/pulumi/examples/aws-py-fargate,program=.) success: #versions=0
I1019 16:15:34.935522 11777 plugins.go:470] GetPluginPath(language, python, ): found on $PATH /usr/local/bin/pulumi-language-python
```
",1.0,"`pulumi plugin install --reinstall` doesn't work for Python projects - #### Problem
`pulumi plugin install --reinstall` doesn't work for Python projects. Running with verbose output seems to indicate zero required plugins were found.
Looking through the code I suspect this `TODO` is the issue.
https://github.com/pulumi/pulumi/blob/89c956d18942c1fcbf687da3052dd26089d8f486/sdk/python/cmd/pulumi-language-python/main.go#L144-L149
#### Verbose Output
```
% pulumi plugin install --reinstall -v9 --logtostderr
I1019 16:15:34.901943 11777 pulumi.go:129] skipping update check
I1019 16:15:34.906450 11777 plugins.go:470] GetPluginPath(language, python, ): found on $PATH /usr/local/bin/pulumi-language-python
I1019 16:15:34.906923 11777 plugin.go:83] Launching plugin 'python' from '/usr/local/bin/pulumi-language-python' with args: 127.0.0.1:61623
I1019 16:15:34.932973 11777 langruntime_plugin.go:178] langhost[python].GetPluginInfo() executing
I1019 16:15:34.934407 11777 langruntime_plugin.go:91] langhost[python].GetRequiredPlugins(proj=aws-py-fargate,pwd=/Users/clstokes/go/src/github.com/pulumi/examples/aws-py-fargate,program=.) executing
I1019 16:15:34.935436 11777 langruntime_plugin.go:133] langhost[python].GetRequiredPlugins(proj=aws-py-fargate,pwd=/Users/clstokes/go/src/github.com/pulumi/examples/aws-py-fargate,program=.) success: #versions=0
I1019 16:15:34.935522 11777 plugins.go:470] GetPluginPath(language, python, ): found on $PATH /usr/local/bin/pulumi-language-python
```
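The linked `TODO` is in Go, but the missing behaviour is easy to state: the Python language host reports zero required plugins instead of deriving them from the project's dependencies. A hedged sketch of that idea, written in JavaScript purely for illustration and assuming a conventional `requirements.txt` with pinned `pulumi-*` packages:

```javascript
// Illustrative only: roughly what GetRequiredPlugins would need to do for
// Python projects, i.e. read requirements.txt and report pulumi-* packages.
const fs = require('fs');

function getRequiredPlugins(requirementsPath) {
  const plugins = [];
  for (const line of fs.readFileSync(requirementsPath, 'utf8').split('\n')) {
    // e.g. "pulumi-aws==3.2.0" -> { name: "aws", version: "3.2.0" }
    const m = line.trim().match(/^pulumi[-_](\S+?)==(\S+)$/);
    if (m) plugins.push({ name: m[1], version: m[2] });
  }
  return plugins;
}

console.log(getRequiredPlugins('requirements.txt'));
```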
",0, pulumi plugin install reinstall doesn t work for python projects problem pulumi plugin install reinstall doesn t work for python projects running with verbose output seems to indicate zero required plugins were found looking through the code i suspect this todo is the issue verbose output pulumi plugin install reinstall logtostderr pulumi go skipping update check plugins go getpluginpath language python found on path usr local bin pulumi language python plugin go launching plugin python from usr local bin pulumi language python with args langruntime plugin go langhost getplugininfo executing langruntime plugin go langhost getrequiredplugins proj aws py fargate pwd users clstokes go src github com pulumi examples aws py fargate program executing langruntime plugin go langhost getrequiredplugins proj aws py fargate pwd users clstokes go src github com pulumi examples aws py fargate program success versions plugins go getpluginpath language python found on path usr local bin pulumi language python ,0
307361,9416130636.0,IssuesEvent,2019-04-10 14:05:23,robotframework/robotframework,https://api.github.com/repos/robotframework/robotframework,closed,Deprecate omitting lines with only `...`,deprecation enhancement priority: medium,"At the moment lines with only the continuation marker `...` are handled somewhat inconsistently.
1. When used with documentation in the settings section or with the `[Documentation]` setting such a line creates an empty documentation line needed to form paragraphs:
```robotframework
*** Settings ***
Documentation First row.
... Second row.
...
... First row of the second paragraph.
```
2. When used as an argument to a keyword, such rows are ignored. For example, here `Keyword` ends up called with two arguments and the line with only `...` is totally ignored:
```robotframework
*** Test Cases ***
Example
Keyword argument
...
... another argument
```
This different handling of lines with only `...` causes problems for the new parser in RF 3.2 (#3076), similarly to #3105 and #3106. It seems that the best way to handle this problem is making lines with only `...` equivalent to lines with only a single empty value. That won't affect the usage with documentation, where this syntax currently is needed and thus actually used. The change obviously affects the usage with keywords as in the second example above, but because currently the syntax has no effect, nobody should have any reason to use it.
We are going to deprecate using `...` without a meaning in RF 3.1.2 and then change the behavior in RF 3.2.",1.0,"Deprecate omitting lines with only `...` - At the moment lines with only the continuation marker `...` are handled somewhat inconsistently.
1. When used with documentation in the settings section or with the `[Documentation]` setting such a line creates an empty documentation line needed to form paragraphs:
```robotframework
*** Settings ***
Documentation First row.
... Second row.
...
... First row of the second paragraph.
```
2. When used as an argument to a keyword, such rows are ignored. For example, here `Keyword` ends up called with two arguments and the line with only `...` is totally ignored:
```robotframework
*** Test Cases ***
Example
Keyword argument
...
... another argument
```
This different handling of lines with only `...` causes problems for the new parser in RF 3.2 (#3076), similarly to #3105 and #3106. It seems that the best way to handle this problem is making lines with only `...` equivalent to lines with only a single empty value. That won't affect the usage with documentation, where this syntax currently is needed and thus actually used. The change obviously affects the usage with keywords as in the second example above, but because currently the syntax has no effect, nobody should have any reason to use it.
We are going to deprecate using `...` without a meaning in RF 3.1.2 and then change the behavior in RF 3.2.",0,deprecate omitting lines with only at the moment lines with only the continuation marker are handled somewhat inconsistently when used with documentation in the settings section or with the setting such a line creates an empty documentation line needed to form paragraphs robotframework settings documentation first row second row first row of the second paragraph when used as an argument to a keyword such rows are ignored for example here keyword ends up called with two arguments and the line with only is totally ignored robotframework test cases example keyword argument another argument this different handling of lines with only causes problems for the new parsed in rf similarly as and it seems that the best way to handle this problem is making lines with only equivalent to lines with only a single empty value that won t affect the usage with documentation where this syntax currently is needed and thus actually used the change obviously affects the usage with keywords as in the second example above but because currently the syntax has no effect nobody should have any reasons to use it we are going to deprecate using without a meaning in rf and then change the behavior in rf ,0
155141,13612621601.0,IssuesEvent,2020-09-23 10:34:43,DigitalExcellence/dex-backend,https://api.github.com/repos/DigitalExcellence/dex-backend,opened,Research: Do we need Audit Logging for our applications?,documentation,"**Is your feature request related to a problem? Please describe.**
Audit Logging can be important in an application. Both for security and debugging.
**Describe the solution you'd like**
Investigate what the benefits are of audit logging, what the drawbacks are.
Investigate what kind of audit logging there is.
Investigate what we could and what we should log for our use case.
Investigate what the implications are in terms of privacy/gdpr.
**Additional context**
I'm especially interested in knowing how we should handle GDPR with this. I also think this will be most useful for the Identity Server to log when a user logs in, when they log out, when a role is changed. But also when a project is deleted or other destructive actions like that occur.
",1.0,"Research: Do we need Audit Logging for our applications? - **Is your feature request related to a problem? Please describe.**
Audit Logging can be important in an application. Both for security and debugging.
**Describe the solution you'd like**
Investigate what the benefits are of audit logging, what the drawbacks are.
Investigate what kind of audit logging there is.
Investigate what we could and what we should log for our use case.
Investigate what the implications are in terms of privacy/gdpr.
**Additional context**
I'm especially interested in knowing how we should handle GDPR with this. I also think this will be most useful for the Identity Server to log when a user logs in, when they log out, when a role is changed. But also when a project is deleted or other destructive actions like that occur.
",0,research do we need audit logging for our applications is your feature request related to a problem please describe audit logging can be important in an application both for security and debugging describe the solution you d like investigate what the benefits are of audit logging what the drawbacks are investigate what kind of audit logging there is investigate what we could and what we should log for our use case investigate what the implications are in terms of privacy gdpr additional context i m especially interested in knowing how we should handle gdpr with this i also think this will be most useful for the identity server to log when a user logs in when they log out when a role is changed but also when a project is deleted or other distructive actions like that ,0
5611,20196525016.0,IssuesEvent,2022-02-11 11:10:14,Music-Bot-for-Jitsi/Jimmi,https://api.github.com/repos/Music-Bot-for-Jitsi/Jimmi,opened,Setup CodeCov or Codeclimate,automation,"
> **As a** developer
> **I want** my code to be checked automatically for test coverage on each push on main
> **so that** I have clear metrics showing my code quality.
## Description:
Setup CodeCov or Codeclimate for this repo using GitHub Actions.
### 🟢 In scope:
### 🔴 Not in scope:
## What should be the result?
",1.0,"Setup CodeCov or Codeclimate -
> **As a** developer
> **I want** my code to be checked automatically for test coverage on each push on main
> **so that** I have clear metrics showing my code quality.
## Description:
Setup CodeCov or Codeclimate for this repo using GitHub Actions.
### 🟢 In scope:
### 🔴 Not in scope:
## What should be the result?
",1,setup codecov or codeclimate as a developer i want my code to be checked automatically for test coverage on each push on main so that i have clear metrics showing my code quality description setup codecov or codeclimate for this repo using github actions 🟢 in scope 🔴 not in scope what should be the result ,1
74573,25184650576.0,IssuesEvent,2022-11-11 16:43:40,idaholab/moose,https://api.github.com/repos/idaholab/moose,opened,Sibling transfer issues warning if multiapps are on different execute_ons,C: Framework T: defect P: normal,"## Bug Description
When performing a sub-app to sub-app transfer, i.e. sibling transfer, where the multiapps have different `execute_on`s, it is impossible to avoid a warning stating that the transfer does not have the same `execute_on` flags as either multiapp. It is my opinion that the transfer should execute during the `to_multi_app` execution since sibling transfers are before multiapp execution. And there should only be a warning if it doesn't match the `to_multi_app`
## Steps to Reproduce
Here is the main and sub app inputs recreating the issue:
main.i:
```
[MultiApps]
[sub1]
type = FullSolveMultiApp
input_files = sibling_sub.i
execute_on = TIMESTEP_BEGIN
cli_args = 'Variables/u/initial_condition=2'
[]
[sub2]
type = FullSolveMultiApp
input_files = sibling_sub.i
execute_on = TIMESTEP_END
[]
[]
[Transfers]
[sibling_transfer]
type = MultiAppCopyTransfer
from_multi_app = sub1
to_multi_app = sub2
source_variable = u
variable = u
[]
[]
[Mesh]
[min]
type = GeneratedMeshGenerator
dim = 1
nx = 1
[]
[]
[Problem]
solve = false
kernel_coverage_check = false
skip_nl_system_check = true
verbose_multiapps = true
[]
[Executioner]
type = Steady
[]
```
sibling_sub.i:
```
[Variables/u]
[]
[Postprocessors/avg_u]
type = ElementAverageValue
variable = u
[]
[Mesh]
[min]
type = GeneratedMeshGenerator
dim = 1
nx = 1
[]
[]
[Problem]
solve = false
kernel_coverage_check = false
skip_nl_system_check = true
verbose_multiapps = true
[]
[Executioner]
type = Steady
[]
```
Without specifying the transfer's `execute_on` it seems to want to execute on every possible exec flag (`INITIAL`, `TIMESTEP_BEGIN`, `TIMESTEP_END`, `FINAL`) and it issues the warning:
```
*** Warning ***
The following warning occurred in the object ""sibling_transfer"", of type ""MultiAppCopyTransfer"".
MultiAppTransfer execute_on flags do not match associated to_multi_app execute_on flags
```
With `Transfers/sibling_transfer/execute_on=TIMESTEP_END` it again executes on every flag and issues the warning:
```
*** Warning ***
The following warning occurred in the object ""sibling_transfer"", of type ""MultiAppCopyTransfer"".
MultiAppTransfer execute_on flags do not match associated from_multi_app execute_on flags
```
Finally, with `Transfers/sibling_transfer/execute_on='TIMESTEP_BEGIN TIMESTEP_END'` it executes on every flag and issues the warning:
```
*** Warning ***
The following warning occurred in the object ""sibling_transfer"", of type ""MultiAppCopyTransfer"".
MultiAppTransfer execute_on flags do not match associated from_multi_app execute_on flags
*** Warning ***
The following warning occurred in the object ""sibling_transfer"", of type ""MultiAppCopyTransfer"".
MultiAppTransfer execute_on flags do not match associated to_multi_app execute_on flags
```
To summarize:
- Transfer wants to execute on every flag, no matter what.
- No `execute_on`: warning for `to_multi_app`
- `execute_on=TIMESTEP_END`: warning for `from_multi_app`
- `execute_on='TIMESTEP_BEGIN TIMESTEP_END'`: warning for both `from_multi_app` and `to_multi_app`
## Impact
Two major impacts:
1. Although it does not affect the answer, performing transfers on every exec flag could be costly for some transfers.
2. The unavoidable warning message is erroneous (IMO) and a test with this type of transfer will always fail because of the warning.",1.0,"Sibling transfer issues warning if multiapps are on different execute_ons - ## Bug Description
When performing a sub-app to sub-app transfer, i.e. sibling transfer, where the multiapps have different `execute_on`s, it is impossible to avoid a warning stating that the transfer does not have the same `execute_on` flags as either multiapp. It is my opinion that the transfer should execute during the `to_multi_app` execution since sibling transfers are before multiapp execution. And there should only be a warning if it doesn't match the `to_multi_app`
## Steps to Reproduce
Here is the main and sub app inputs recreating the issue:
main.i:
```
[MultiApps]
[sub1]
type = FullSolveMultiApp
input_files = sibling_sub.i
execute_on = TIMESTEP_BEGIN
cli_args = 'Variables/u/initial_condition=2'
[]
[sub2]
type = FullSolveMultiApp
input_files = sibling_sub.i
execute_on = TIMESTEP_END
[]
[]
[Transfers]
[sibling_transfer]
type = MultiAppCopyTransfer
from_multi_app = sub1
to_multi_app = sub2
source_variable = u
variable = u
[]
[]
[Mesh]
[min]
type = GeneratedMeshGenerator
dim = 1
nx = 1
[]
[]
[Problem]
solve = false
kernel_coverage_check = false
skip_nl_system_check = true
verbose_multiapps = true
[]
[Executioner]
type = Steady
[]
```
sibling_sub.i:
```
[Variables/u]
[]
[Postprocessors/avg_u]
type = ElementAverageValue
variable = u
[]
[Mesh]
[min]
type = GeneratedMeshGenerator
dim = 1
nx = 1
[]
[]
[Problem]
solve = false
kernel_coverage_check = false
skip_nl_system_check = true
verbose_multiapps = true
[]
[Executioner]
type = Steady
[]
```
Without specifying the transfer's `execute_on` it seems to want to execute on every possible exec flag (`INITIAL`, `TIMESTEP_BEGIN`, `TIMESTEP_END`, `FINAL`) and it issues the warning:
```
*** Warning ***
The following warning occurred in the object ""sibling_transfer"", of type ""MultiAppCopyTransfer"".
MultiAppTransfer execute_on flags do not match associated to_multi_app execute_on flags
```
With `Transfers/sibling_transfer/execute_on=TIMESTEP_END` it again executes on every flag and issues the warning:
```
*** Warning ***
The following warning occurred in the object ""sibling_transfer"", of type ""MultiAppCopyTransfer"".
MultiAppTransfer execute_on flags do not match associated from_multi_app execute_on flags
```
Finally, with `Transfers/sibling_transfer/execute_on='TIMESTEP_BEGIN TIMESTEP_END'` it executes on every flag and issues the warning:
```
*** Warning ***
The following warning occurred in the object ""sibling_transfer"", of type ""MultiAppCopyTransfer"".
MultiAppTransfer execute_on flags do not match associated from_multi_app execute_on flags
*** Warning ***
The following warning occurred in the object ""sibling_transfer"", of type ""MultiAppCopyTransfer"".
MultiAppTransfer execute_on flags do not match associated to_multi_app execute_on flags
```
To summarize:
- Transfer wants to execute on every flag, no matter what.
- No `execute_on`: warning for `to_multi_app`
- `execute_on=TIMESTEP_END`: warning for `from_multi_app`
- `execute_on='TIMESTEP_BEGIN TIMESTEP_END'`: warning for both `from_multi_app` and `to_multi_app`
## Impact
Two major impacts:
1. Although it does not affect the answer, performing transfers on every exec flag could be costly for some transfers.
2. The unavoidable warning message is erroneous (IMO) and a test with this type of transfer will always fail because of the warning.",0,sibling transfer issues warning if multiapps are on different execute ons bug description when performing a sub app to sub app transfer i e sibling transfer where the multiapps have different execute on s it is impossible to avoid a warning stating that the transfer does not have the same execute on flags as either multiapp it is my opinion that the transfer should execute during the to multi app execution since sibling transfers are before multiapp execution and there should only be a warning if it doesn t match the to multi app steps to reproduce here is the main and sub app inputs recreating the issue main i type fullsolvemultiapp input files sibling sub i execute on timestep begin cli args variables u initial condition type fullsolvemultiapp input files sibling sub i execute on timestep end type multiappcopytransfer from multi app to multi app source variable u variable u type generatedmeshgenerator dim nx solve false kernel coverage check false skip nl system check true verbose multiapps true type steady sibling sub i type elementaveragevalue variable u type generatedmeshgenerator dim nx solve false kernel coverage check false skip nl system check true verbose multiapps true type steady without specifying the transfer s execute on it seems to want to execute on every possible exec flag initial timestep begin timestep end final and it issues the warning warning the following warning occurred in the object sibling transfer of type multiappcopytransfer multiapptransfer execute on flags do not match associated to multi app execute on flags with transfers sibling transfer execute on timestep end it again executes on every flag and issues the warning warning the following warning occurred in the object sibling transfer of type multiappcopytransfer multiapptransfer execute on flags do not match associated from multi app execute on flags finally with transfers sibling transfer execute on timestep begin timestep end it executes on every flag and issues the warning warning the following warning occurred in the object sibling transfer of type multiappcopytransfer multiapptransfer execute on flags do not match associated from multi app execute on flags warning the following warning occurred in the object sibling transfer of type multiappcopytransfer multiapptransfer execute on flags do not match associated to multi app execute on flags to summarize transfer wants to execute on every flag no matter what no execute on warning for to multi app execute on timestep end warning for from multi app execute on timestep begin timestep end warning for both from multi app and to multi app impact two major impacts although it does not affect the answer performing transfers on every exec flag could be costly for some transfers the unavoidable warning message is erroneous imo and a test with this type of transfer will always fail because of the warning ,0
445901,31334924305.0,IssuesEvent,2023-08-24 04:50:42,bghira/SimpleTuner,https://api.github.com/repos/bghira/SimpleTuner,closed,Optimizer: Adafactor support for consumer GPUs,documentation enhancement help wanted good first issue,"Khoya's recommended settings for training SDXL on a 24G GPU (eg. a 3090, 4090) rely on the use of the Adafactor optimizer. This is not currently supported by SimpleTuner.
AC:
* Add an option for the use of Adafactor instead of AdamW, AdamW8Bit, Dadapt",1.0,"Optimizer: Adafactor support for consumer GPUs - Khoya's recommended settings for training SDXL on a 24G GPU (eg. a 3090, 4090) rely on the use of the Adafactor optimizer. This is not currently supported by SimpleTuner.
AC:
* Add an option for the use of Adafactor instead of AdamW, AdamW8Bit, Dadapt",0,optimizer adafactor support for consumer gpus khoya s recommended settings for training sdxl on a gpu eg a rely on the use of the adafactor optimizer this is not currently supported by simpletuner ac add an option for the use of adafactor instead of adamw dadapt,0
4836,17693701785.0,IssuesEvent,2021-08-24 13:10:33,CDCgov/prime-field-teams,https://api.github.com/repos/CDCgov/prime-field-teams,reopened,Solution Testing - CSV File Cleanup VBScript,sender-automation,"**Description:**
Maybe an additional parameter to indicate what to do with a file in the Output folder/Arg#1 that already exists in /Processed Folder.
Scenario #1 - The program exporting the CSV file always names the file the same thing, so if the file exists in /Processed, we must append (1), (2), etc. to the fileBase and then process the file. The MoveFile function's target must be modified to include the new fileBase + "".csv""
Scenario #2 - If the CSV file always has a unique file name, and if it already exists in /Processed, we do not want to process it. We must skip it. ",1.0,"Solution Testing - CSV File Cleanup VBScript - **Description:**
Maybe an additional parameter to indicate what to do with a file in the Output folder/Arg#1 that already exists in /Processed Folder.
-Scenario #1 - The program exporting the CSV file always names the file the same thing, so if the file exists in /Processed, we must append (1), (2), etc to the fileBase and then process the file. The MoveFile function's target must be modified to include the new fileBase + "".csv""
Scenario #2 - If the CSV file always has a unique file name, and if it already exists in /Processed, we do not want to process it. We must skip it. ",1,solution testing csv file cleanup vbscript description maybe an additional parameter to indicate what to do with a file in the output folder arg that already exists in processed folder scenario the program exporting the csv file always names the file the same thing so if the file exists in processed we must append etc to the filebase and then process the file the movefile function s target must be modified to include the new filebase csv scenario if the csv file always has a unique file name and if it already exists in processed we do not want to process it we must skip it ,1
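The two scenarios above boil down to a rename-or-skip decision before the move. Here is a small sketch of that control flow, written in Node.js only to illustrate the logic (the actual tool is VBScript); the folder names come from the issue, the function names are mine.

```javascript
// Illustrative sketch only (Node.js, not the actual VBScript): shows the
// rename-or-skip decision described in scenarios #1 and #2 above.
const fs = require('fs');
const path = require('path');

// Scenario #1: same-named exports -- find the next free "name (n).csv" in /Processed.
function nextFreeName(processedDir, fileBase) {
  let candidate = `${fileBase}.csv`;
  for (let n = 1; fs.existsSync(path.join(processedDir, candidate)); n++) {
    candidate = `${fileBase} (${n}).csv`;
  }
  return candidate;
}

// Scenario #2: unique file names -- skip files that already exist in /Processed.
function moveToProcessed(outputDir, processedDir, fileName, uniqueNames) {
  const target = path.join(processedDir, fileName);
  if (uniqueNames && fs.existsSync(target)) {
    return null; // already processed, skip it
  }
  const fileBase = path.basename(fileName, '.csv');
  const finalName = uniqueNames ? fileName : nextFreeName(processedDir, fileBase);
  fs.renameSync(path.join(outputDir, fileName), path.join(processedDir, finalName));
  return finalName;
}
```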
8364,26799174301.0,IssuesEvent,2023-02-01 14:02:26,quarkusio/quarkus,https://api.github.com/repos/quarkusio/quarkus,closed,`cancel-previous-runs` GH action is still using deprecated Node.js 12 (only in forked repos),area/housekeeping area/infra-automation,"### Description

https://github.com/famod/quarkus/actions/runs/4047994738
We only use this action in foked repos: https://github.com/quarkusio/quarkus/blob/2.16.0.Final/.github/workflows/ci-actions-incremental.yml#L123-L125
### Implementation ideas
- update the action (hi @n1hility 😉)
- or find a replacement
- or drop it without a replacement (devs need to cancel the run themselves...meh!)",1.0,"`cancel-previous-runs` GH action is still using deprecated Node.js 12 (only in forked repos) - ### Description

https://github.com/famod/quarkus/actions/runs/4047994738
We only use this action in forked repos: https://github.com/quarkusio/quarkus/blob/2.16.0.Final/.github/workflows/ci-actions-incremental.yml#L123-L125
### Implementation ideas
- update the action (hi @n1hility 😉)
- or find a replacement
- or drop it without a replacement (devs need to cancel the run themselves...meh!)",1, cancel previous runs gh action is still using deprecated node js only in forked repos description we only use this action in foked repos implementation ideas update the action hi 😉 or find a replacement or drop it without a replacement devs need to cancel the run themselves meh ,1
5998,21866716926.0,IssuesEvent,2022-05-19 00:13:23,Studio-Ops-Org/Studio-2022-S1-Repo,https://api.github.com/repos/Studio-Ops-Org/Studio-2022-S1-Repo,opened,ARM Template w CSV,Automation,Rob would like a version of the ARM template for OE1/OSC that creates student users based on a CSV file. The CSV file will hold their username and password. Talk to Rob for any further specifications. ,1.0,ARM Template w CSV - Rob would like a version of the ARM template for OE1/OSC that creates student users based on a CSV file. The CSV file will hold their username and password. Talk to Rob for any further specifications. ,1,arm template w csv rob would like a version of the arm template for osc that creates student users based on a csv file the csv file will hold their username and password talk to rob for any further specifications ,1
4024,15185279747.0,IssuesEvent,2021-02-15 10:43:13,elastic/apm-server,https://api.github.com/repos/elastic/apm-server,closed,[CI] Package stage fails,automation bug ci team:automation,"after https://github.com/elastic/apm-server/pull/4695 the stage package is failing, we have to investigate why.
```
./.ci/scripts/package-docker-snapshot.sh e2326063d084e3a019e77734a7f17c32616e4b32 docker.elastic.co/observability-ci/apm-server
...
[2021-02-09T02:47:36.777Z] 2021/02/09 02:47:36 exec: go list -m
[2021-02-09T02:51:43.550Z] >> package: Building apm-server type=docker for platform=linux/amd64
[2021-02-09T02:51:43.551Z] >> package: Building apm-server-oss type=deb for platform=linux/amd64
[2021-02-09T02:51:43.551Z] >> package: Building apm-server-oss type=tar.gz for platform=linux/amd64
[2021-02-09T02:51:43.551Z] >> package: Building apm-server-oss type=rpm for platform=linux/amd64
[2021-02-09T02:51:46.122Z] >> package: Building apm-server type=tar.gz for platform=linux/amd64
[2021-02-09T02:51:46.701Z] Doing `require 'backports'` is deprecated and will not load any backport in the next major release.
[2021-02-09T02:51:46.701Z] Require just the needed backports instead, or 'backports/latest'.
[2021-02-09T02:51:46.966Z] Doing `require 'backports'` is deprecated and will not load any backport in the next major release.
[2021-02-09T02:51:46.967Z] Require just the needed backports instead, or 'backports/latest'.
[2021-02-09T02:51:57.029Z] >> package: Building apm-server-oss type=docker for platform=linux/amd64
[2021-02-09T02:51:57.029Z] >> package: Building apm-server type=deb for platform=linux/amd64
[2021-02-09T02:52:09.373Z] Doing `require 'backports'` is deprecated and will not load any backport in the next major release.
[2021-02-09T02:52:09.373Z] Require just the needed backports instead, or 'backports/latest'.
[2021-02-09T02:52:11.308Z] >> package: Building apm-server type=rpm for platform=linux/amd64
[2021-02-09T02:52:23.607Z] Doing `require 'backports'` is deprecated and will not load any backport in the next major release.
[2021-02-09T02:52:23.607Z] Require just the needed backports instead, or 'backports/latest'.
[2021-02-09T02:52:50.305Z] >> Testing package contents
[2021-02-09T02:53:12.310Z] package ran for 5m44.920545464s
[2021-02-09T02:53:12.310Z] INFO: Get the just built docker image
[2021-02-09T02:53:12.310Z] INFO: Retag docker image (docker.elastic.co/apm/apm-server:)
[2021-02-09T02:53:12.310Z] Error parsing reference: ""docker.elastic.co/apm/apm-server:"" is not a valid repository/tag: invalid reference format
```",2.0,"[CI] Package stage fails - after https://github.com/elastic/apm-server/pull/4695 the stage package is failing, we have to investigate why.
```
./.ci/scripts/package-docker-snapshot.sh e2326063d084e3a019e77734a7f17c32616e4b32 docker.elastic.co/observability-ci/apm-server
...
[2021-02-09T02:47:36.777Z] 2021/02/09 02:47:36 exec: go list -m
[2021-02-09T02:51:43.550Z] >> package: Building apm-server type=docker for platform=linux/amd64
[2021-02-09T02:51:43.551Z] >> package: Building apm-server-oss type=deb for platform=linux/amd64
[2021-02-09T02:51:43.551Z] >> package: Building apm-server-oss type=tar.gz for platform=linux/amd64
[2021-02-09T02:51:43.551Z] >> package: Building apm-server-oss type=rpm for platform=linux/amd64
[2021-02-09T02:51:46.122Z] >> package: Building apm-server type=tar.gz for platform=linux/amd64
[2021-02-09T02:51:46.701Z] Doing `require 'backports'` is deprecated and will not load any backport in the next major release.
[2021-02-09T02:51:46.701Z] Require just the needed backports instead, or 'backports/latest'.
[2021-02-09T02:51:46.966Z] Doing `require 'backports'` is deprecated and will not load any backport in the next major release.
[2021-02-09T02:51:46.967Z] Require just the needed backports instead, or 'backports/latest'.
[2021-02-09T02:51:57.029Z] >> package: Building apm-server-oss type=docker for platform=linux/amd64
[2021-02-09T02:51:57.029Z] >> package: Building apm-server type=deb for platform=linux/amd64
[2021-02-09T02:52:09.373Z] Doing `require 'backports'` is deprecated and will not load any backport in the next major release.
[2021-02-09T02:52:09.373Z] Require just the needed backports instead, or 'backports/latest'.
[2021-02-09T02:52:11.308Z] >> package: Building apm-server type=rpm for platform=linux/amd64
[2021-02-09T02:52:23.607Z] Doing `require 'backports'` is deprecated and will not load any backport in the next major release.
[2021-02-09T02:52:23.607Z] Require just the needed backports instead, or 'backports/latest'.
[2021-02-09T02:52:50.305Z] >> Testing package contents
[2021-02-09T02:53:12.310Z] package ran for 5m44.920545464s
[2021-02-09T02:53:12.310Z] INFO: Get the just built docker image
[2021-02-09T02:53:12.310Z] INFO: Retag docker image (docker.elastic.co/apm/apm-server:)
[2021-02-09T02:53:12.310Z] Error parsing reference: ""docker.elastic.co/apm/apm-server:"" is not a valid repository/tag: invalid reference format
```",1, package stage fails after the stage package is failing we have to investigate why ci scripts package docker snapshot sh docker elastic co observability ci apm server exec go list m package building apm server type docker for platform linux package building apm server oss type deb for platform linux package building apm server oss type tar gz for platform linux package building apm server oss type rpm for platform linux package building apm server type tar gz for platform linux doing require backports is deprecated and will not load any backport in the next major release require just the needed backports instead or backports latest doing require backports is deprecated and will not load any backport in the next major release require just the needed backports instead or backports latest package building apm server oss type docker for platform linux package building apm server type deb for platform linux doing require backports is deprecated and will not load any backport in the next major release require just the needed backports instead or backports latest package building apm server type rpm for platform linux doing require backports is deprecated and will not load any backport in the next major release require just the needed backports instead or backports latest testing package contents package ran for info get the just built docker image info retag docker image docker elastic co apm apm server error parsing reference docker elastic co apm apm server is not a valid repository tag invalid reference format ,1
51852,12819860741.0,IssuesEvent,2020-07-06 03:46:45,ballerina-platform/ballerina-lang,https://api.github.com/repos/ballerina-platform/ballerina-lang,closed,ballerina: unknown command 'dist',Area/BuildTools,"os:win10
why???
```
ballerina -v
jBallerina 1.2.2
Language specification 2020R1
```
```
ballerina dist update
ballerina: unknown command 'dist'
Run 'ballerina help' for usage.
```",1.0,"ballerina: unknown command 'dist' - os:win10
why???
```
ballerina -v
jBallerina 1.2.2
Language specification 2020R1
```
```
ballerina dist update
ballerina: unknown command 'dist'
Run 'ballerina help' for usage.
```",0,ballerina unknown command dist os why ballerina v jballerina language specification ballerina dist update ballerina unknown command dist run ballerina help for usage ,0
382565,26504553097.0,IssuesEvent,2023-01-18 12:55:31,activepieces/activepieces,https://api.github.com/repos/activepieces/activepieces,closed,Pieces Framework Reference,documentation,"## Table of Contents
- Property Types
- Action Reference
- Trigger Reference
- How to Create Triggers and Types
- Basic explanation of OAuth2
",1.0,"Pieces Framework Reference - ## Table of Contents
- Property Types
- Action Reference
- Trigger Reference
- How to Create Triggers and Types
- Basic explanation of OAuth2
",0,pieces framework reference table of contents property types action reference trigger reference how to create triggers and types basic explanation of ,0
325783,24061352116.0,IssuesEvent,2022-09-16 23:23:39,dafny-lang/compiler-bootstrap,https://api.github.com/repos/dafny-lang/compiler-bootstrap,closed,Add documentation for auditor,documentation,"To document the use of the auditor tool, let's add a `README.rst` in the `src/Tools/Auditor` directory with a link from the top-level `README.rst`.",1.0,"Add documentation for auditor - To document the use of the auditor tool, let's add a `README.rst` in the `src/Tools/Auditor` directory with a link from the top-level `README.rst`.",0,add documentation for auditor to document the use of the auditor tool let s add a readme rst in the src tools auditor directory with a link from the top level readme rst ,0
6709,23770460387.0,IssuesEvent,2022-09-01 15:52:53,kedacore/keda,https://api.github.com/repos/kedacore/keda,closed,Use re-usable workflows for GitHub Actions,enhancement help wanted cant-touch-this automation,"Use re-usable workflows for GitHub Actions to remove the duplication that we have, for example for [doing e2e-tests](https://github.com/kedacore/keda/pull/2568).",1.0,"Use re-usable workflows for GitHub Actions - Use re-usable workflows for GitHub Actions to remove the duplication that we have, for example for [doing e2e-tests](https://github.com/kedacore/keda/pull/2568).",1,use re usable workflows for github actions use re usable workflows for github actions to remove the duplication that we have for example for ,1
23405,16111036724.0,IssuesEvent,2021-04-27 21:14:45,APSIMInitiative/ApsimX,https://api.github.com/repos/APSIMInitiative/ApsimX,closed,Properties in map component overlap map in gtk2 version,bug interface/infrastructure,"In the gtk2 build, the properties in the map component overlap the actual map area.",1.0,"Properties in map component overlap map in gtk2 version - In the gtk2 build, the properties in the map component overlap the actual map area.",0,properties in map component overlap map in version in the build the properties in the map component overlap the actual map area ,0
2323,11770750916.0,IssuesEvent,2020-03-15 20:38:04,matchID-project/deces-ui,https://api.github.com/repos/matchID-project/deces-ui,closed,Change method for versionning docker image using pertinent code only and tagging versions with branch,Automation,"APP_VERSION should only consider an hash of
- js code
- docker environment
- maybe Makefile
But it must exclude travis and branch merges, for example - an already built and published docker image should not be rebuilt nor republished.
Explore it using something like:
git describe $(git log -n 1 --format=%H -- path/to/subfolder).
Moreover a docker tag should be put after successful tests only, as a proof of good deployability.
A docker version would then just be built once, reducing each merging process by about 1m30s.
PR could test locally and publish the version so that dev can focus on remote deployment. Maybe 1 min less for local testing in dev and master branches.
Finally - a rebuild of a version in travis for dev and master branches should consider caching on successful builds, like the docker image, and skip directly to deployment for acceleration, then publish tags.
A dev merge could drop to 4 minutes.
And a master to 7 minutes.
All this may include most changes in matchid/tools
",1.0,"Change method for versionning docker image using pertinent code only and tagging versions with branch - APP_VERSION should only consider an hash of
- js code
- docker environment
- maybe Makefile
But must exclude travis and branche merge for example - an already build and published docker image should not be rebuilt and note republished.
Explore it using something like :
git describe $(git log -n 1 --format=%H -- path/to/subfolder).
Moreover a docker tag should be put after successfull tests only, as a proof of good deployability.
A docker version would then just be built once. Reducing from 1""30 each merging process.
PR could test locally and publish version so that dev focus on remote deployment. May be 1 min less for local testing in dev and master branches.
Finally - a rebuild of a version in travis for dev and master branches should consider caching on sucessfull build like docker image. And skip ditectly to deployment for accelaration then publish tags.
A dev merge could pass to 4 minutes.
And a master to 7 minutes.
All this may incude most changes in matchid/tools
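A minimal sketch of the idea in this issue: derive APP_VERSION from the last commit that touched only the pertinent paths, so merges and CI-only changes do not force a rebuild. The path list below is an assumption for illustration, not the repository's actual layout.

```javascript
// Sketch: version the image from the last commit touching the pertinent code.
const { execSync } = require('child_process');

const paths = ['src', 'Dockerfile', 'Makefile']; // assumed "pertinent" paths
const lastTouch = execSync(`git log -n 1 --format=%H -- ${paths.join(' ')}`, { encoding: 'utf8' }).trim();
// --always falls back to the abbreviated hash when no tag is reachable.
const appVersion = execSync(`git describe --always ${lastTouch}`, { encoding: 'utf8' }).trim();
console.log(appVersion); // only a new value triggers a docker build/publish
```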
",1,change method for versionning docker image using pertinent code only and tagging versions with branch app version should only consider an hash of js code docker environment maybe makefile but must exclude travis and branche merge for example an already build and published docker image should not be rebuilt and note republished explore it using something like git describe git log n format h path to subfolder moreover a docker tag should be put after successfull tests only as a proof of good deployability a docker version would then just be built once reducing from each merging process pr could test locally and publish version so that dev focus on remote deployment may be min less for local testing in dev and master branches finally a rebuild of a version in travis for dev and master branches should consider caching on sucessfull build like docker image and skip ditectly to deployment for accelaration then publish tags a dev merge could pass to minutes and a master to minutes all this may incude most changes in matchid tools ,1
4727,17359791902.0,IssuesEvent,2021-07-29 18:52:26,BCDevOps/developer-experience,https://api.github.com/repos/BCDevOps/developer-experience,opened,Create an RocketChat channel for Advanced Solution's incident ticket. - RC- ServiceNow integration - ,enhancement env/prod rocketchat team/DXC tech/automation,"**Describe the issue**
Currently `# testing-for-integration` is set up for RC-SNOW integration testing. This will move to a prod channel once information below are provided.
**Definition of done**
- [ ] new channel name
- [ ] list of people who can access the channel
- [ ] set up the channel using integration scripts
**Additional context**
- Test integration channel: https://chat.developer.gov.bc.ca/group/testing-for-integration
- Document for [Integrate RocketChat to AdvSol ServiceNow](https://github.com/bcgov-c/platform-ops/tree/ocp4-base/tools/rocketchat-servicenow#integrate-rocketchat-to-advsol-servicenow)
",1.0,"Create an RocketChat channel for Advanced Solution's incident ticket. - RC- ServiceNow integration - - **Describe the issue**
Currently `# testing-for-integration` is set up for RC-SNOW integration testing. This will move to a prod channel once information below are provided.
**Definition of done**
- [ ] new channel name
- [ ] list of people who can access the channel
- [ ] set up the channel using integration scripts
**Additional context**
- Test integration channel: https://chat.developer.gov.bc.ca/group/testing-for-integration
- Document for [Integrate RocketChat to AdvSol ServiceNow](https://github.com/bcgov-c/platform-ops/tree/ocp4-base/tools/rocketchat-servicenow#integrate-rocketchat-to-advsol-servicenow)
",1,create an rocketchat channel for advanced solution s incident ticket rc servicenow integration describe the issue currently testing for integration is set up for rc snow integration testing this will move to a prod channel once information below are provided definition of done new channel name list of people who can access the channel set up the channel using integration scripts additional context test integration channel document for ,1
179853,6630553067.0,IssuesEvent,2017-09-25 00:07:19,FACG2/wrap,https://api.github.com/repos/FACG2/wrap,closed,database: change state name in state table to unique,bug priority-3 technical,database: change state name in state table to unique,1.0,database: change state name in state table to unique - database: change state name in state table to unique,0,database change state name in state table to unique database change state name in state table to unique,0
721946,24844385618.0,IssuesEvent,2022-10-26 14:52:07,AY2223S1-CS2103T-T12-1/tp,https://api.github.com/repos/AY2223S1-CS2103T-T12-1/tp,closed,"As a new TA and new user to the system, I can lookup the user guide to better understand what I can do with the system",type.Story priority.HIGH,...so that I can make the most out of the system to ease my TA experience.,1.0,"As a new TA and new user to the system, I can lookup the user guide to better understand what I can do with the system - ...so that I can make the most out of the system to ease my TA experience.",0,as a new ta and new user to the system i can lookup the user guide to better understand what i can do with the system so that i can make the most out of the system to ease my ta experience ,0
25123,4146856586.0,IssuesEvent,2016-06-15 02:47:31,steedos/apps,https://api.github.com/repos/steedos/apps,closed,After creating a form and editing its content the next-step name changes according to the condition but the assignee does not. The approval node's position-based assignee is displayed automatically while the fill-in node's assignee property is 'person specified at approval time'.,fix:Done test:OK type:bug,"Next step: Approval

Next step: Fill-in

",1.0,"新建表单,修改表单内容,根据条件判断后,下一步骤名会变,处理人不会变。审批节点的岗位处理人是自动显示的,但是填写节点的处理人属性是审批时指定人员。 - 下一步骤:审批

下一步骤:填写

",0,新建表单,修改表单内容,根据条件判断后,下一步骤名会变,处理人不会变。审批节点的岗位处理人是自动显示的,但是填写节点的处理人属性是审批时指定人员。 下一步骤:审批 下一步骤:填写 ,0
7783,25599037493.0,IssuesEvent,2022-12-01 18:28:50,shellebusch2/DSAllo,https://api.github.com/repos/shellebusch2/DSAllo,closed,Refactor automation solution in Make,enhancement feature-automation,"There needs to be a more efficient way to automate form fields — one that does not use manually-entered if statements and one that can dynamically scale without any manual adjustments when new entries are created on config databases (i.e. ALLO opens a new market)
Brian mentioned a solution that involved
- pulling entries from config database
- putting them in an array
- calling that array when mapping/automating make field in the complaint database
Although further discussion needs to be had with Brian. ",1.0,"Refactor automation solution in Make - There needs to be a more efficient way to automate form fields — one that does not use manually-entered if statements and one that can dynamically scale without any manual adjustments when new entries are created on config databases (i.e. ALLO opens a new market)
Brian mentioned a solution that involved
- pulling entries from config database
- putting them in an array
- calling that array when mapping/automating make field in the complaint database
Although further discussion needs to be had with Brian. ",1,refactor automation solution in make there needs to be a more efficient way to automate form fields — one that does not use manually entered if statements and one that can dynamically scale without any manual adjustments when new entries are created on config databases i e allo opens a new market brian mentioned a solution that involved pulling entries from config database putting them in an array calling that array when mapping automating make field in the complaint database although further discussion needs to be had with brian ,1
8024,26125205743.0,IssuesEvent,2022-12-28 17:30:37,bcgov/api-services-portal,https://api.github.com/repos/bcgov/api-services-portal,closed,Cypress Test -Keycloak Migration,automation,"- [ ] Update existing test as per changes to identify user (email id instead of user name)
- [ ] Create scenario for user migration
1. Assign Access to existing user Spec
1.1 authenticates Janis (api owner)
1.2 Navigate to Namespace Access Page
1.3 Grant namespace access to Old User
2. Authenticate with old user to initiate migration
2.1 authenticates with old user
3. Verify that permission of old user is migrated to new user
3.1 authenticates with new user
3.2 Get the permission of the user
3.3 Verify that new user scopes are same as permissions given to old users
4. Verify that old user is no longer able to sign in
4.1 authenticates with old user
4.2 Verify that user account is disabled
",1.0,"Cypress Test -Keycloak Migration - - [ ] Update existing test as per changes to identify user (email id instead of user name)
- [ ] Create scenario for user migration
1. Assign Access to existing user Spec
1.1 authenticates Janis (api owner)
1.2 Navigate to Namespace Access Page
1.3 Grant namespace access to Old User
2. Authenticate with old user to initiate migration
2.1 authenticates with old user
3. Verify that permission of old user is migrated to new user
3.1 authenticates with new user
3.2 Get the permission of the user
3.3 Verify that new user scopes are same as permissions given to old users
4. Verify that old user is no longer able to sign in
4.1 authenticates with old user
4.2 Verify that user account is disabled
",1,cypress test keycloak migration update existing test as per changes to identify user email id instead of user name create scenario for user migration assign access to existing user spec authenticates janis api owner navigate to namespace access page grant namespace access to old user authernticate with old user to initiate migration authenticates with old user verify that permission of old user is migrated to new user authenticates with new user get the permission of the user verify that new user scopes are same as permissions given to old users verify that old user is no longer able to sign in authenticates with old user verify that user account is disabled ,1
163452,6198589631.0,IssuesEvent,2017-07-05 19:31:51,GoogleCloudPlatform/google-cloud-node,https://api.github.com/repos/GoogleCloudPlatform/google-cloud-node,closed,spanner: Unable to insert multiple rows in a single table.insert/transaction.insert call when some rows lack a value for nullable columns,api: spanner priority: p0 type: bug,"In [transaction-request.js](https://github.com/GoogleCloudPlatform/google-cloud-node/blob/ca0d96cb94f7ef4e176edf221f9ee154dcf8c850/packages/spanner/src/transaction-request.js#L687-L695), there is the following (L687-695):
```javascript
mutation[method] = {
  table: table,
  columns: Object.keys(keyVals[0]),
  values: keyVals.map(function(keyVal) {
    return Object.keys(keyVal).map(function(key) {
      return codec.encode(keyVal[key]);
    });
  })
};
```
There are at least two bugs here:
1. When an array is provided to table.insert(), if some objects do not have a given key (which is perfectly valid if the column said key corresponds to is nullable), the array within `values` for this object will not be the same length as the columns array, and indices in `columns` will no longer correspond to the correct value in the `values` array for this row/object. This results in (best case) misleading error messages from Spanner about incorrect data types. Worse still, if it so happens that the value in the `values` array for this row and incorrect column index is the same type as the value for the correct column index, the insert request will succeed, and end up storing data in the wrong column entirely.
2. If a consumer of this library works around the issue above by populating missing nullable keys with `null`, as the code above relies on the order of values returned by Object.keys, the newly added values will be at the end of the array returned by Object.keys (which returns keys in insertion order), and again, the indices between the `columns` array, and the `values` array for affected rows will no longer align, with the same end result as what occurs with issue 1 above.
This could be fixed by iterating over all rows (rather than naively assuming all rows have identical keys to the first row) and collecting all unique keys into a single array which is subsequently sorted lexically, to build the value for `columns`. Likewise, the value for `values` would be constructed by mapping over `columns` (rather than `Object.keys(keyVal)`) and pulling out the corresponding value from `keyVal` to pass to `codec.encode`.",1.0,"spanner: Unable to insert multiple rows in a single table.insert/transaction.insert call when some rows lack a value for nullable columns - In [transaction-request.js](https://github.com/GoogleCloudPlatform/google-cloud-node/blob/ca0d96cb94f7ef4e176edf221f9ee154dcf8c850/packages/spanner/src/transaction-request.js#L687-L695), there is the following (L687-695):
```javascript
mutation[method] = {
  table: table,
  columns: Object.keys(keyVals[0]),
  values: keyVals.map(function(keyVal) {
    return Object.keys(keyVal).map(function(key) {
      return codec.encode(keyVal[key]);
    });
  })
};
```
There are at least two bugs here:
1. When an array is provided to table.insert(), if some objects do not have a given key (which is perfectly valid if the column said key corresponds to is nullable), the array within `values` for this object will not be the same length as the columns array, and indices in `columns` will no longer correspond to the correct value in the `values` array for this row/object. This results in (best case) misleading error messages from Spanner about incorrect data types. Worse still, if it so happens that the value in the `values` array for this row and incorrect column index is the same type as the value for the correct column index, the insert request will succeed, and end up storing data in the wrong column entirely.
2. If a consumer of this library works around the issue above by populating missing nullable keys with `null`, as the code above relies on the order of values returned by Object.keys, the newly added values will be at the end of the array returned by Object.keys (which returns keys in insertion order), and again, the indices between the `columns` array, and the `values` array for affected rows will no longer align, with the same end result as what occurs with issue 1 above.
This could be fixed by iterating over all rows (rather than naively assuming all rows have identical keys to the first row) and collecting all unique keys into a single array which is subsequently sorted lexically, to build the value for `columns`. Likewise, the value for `values` would be constructed by mapping over `columns` (rather than `Object.keys(keyVal)`) and pulling out the corresponding value from `keyVal` to pass to `codec.encode`.",0,spanner unable to insert multiple rows in a single table insert transaction insert call when some rows lack a value for nullable columns in there is the following javascript mutation table table columns object keys keyvals values keyvals map function keyval return object keys keyval map function key return codec encode keyval there are at least two bugs here when an array is provided to table insert if some objects do not have a given key which is perfectly valid if the column said key corresponds to is nullable the array within values for this object will not be the same length as the columns array and indices in columns will no longer correspond to the correct value in the values array for this row object this results in best case misleading error messages from spanner about incorrect data types worse still if it so happens that the value in the values array for this row and incorrect column index is the same type as the value for the correct column index the insert request will succeed and end up storing data in the wrong column entirely if a consumer of this library works around the issue above by populating missing nullable keys with null as the code above relies on the order of values returned by object keys the newly added values will be at the end of the array returned by object keys which returns keys in insertion order and again the indices between the columns array and the values array for affected rows will no longer align with the same end result as what occurs with issue above this could be fixed by iterating over all rows rather than naively assuming all rows have identical keys to the first row and collecting all unique keys into a single array which is subsequently sorted lexically to build the value for columns likewise the value for values would be constructed by mapping over columns rather than object keys keyval and pulling out the corresponding value from keyval to pass to codec encode ,0
8068,26149082932.0,IssuesEvent,2022-12-30 10:38:02,dyne/frei0r,https://api.github.com/repos/dyne/frei0r,opened,Plugin fuzzing test,automation,"Include a test for fuzzing the input of plugins and loading them one-by-one with a test video source and changing every parameter with all possible values (perhaps also beyond the defined scope) to test their stability.
## Links
- [Info on fuzzing](https://www.code-intelligence.com/blog/secure-coding-cpp-using-fuzzing)
- Best fuzzing tool for our use case may be [cifuzz](https://github.com/CodeIntelligenceTesting/cifuzz)",1.0,"Plugin fuzzing test - Include a test for fuzzing the input of plugins and loading them one-by-one with a test video source and changing every parameter with all possible values (perhaps also beyond the defined scope) to test their stability.
## Links
- [Info on fuzzing](https://www.code-intelligence.com/blog/secure-coding-cpp-using-fuzzing)
- Best fuzzing tool for our use case may be [cifuzz](https://github.com/CodeIntelligenceTesting/cifuzz)",1,plugin fuzzing test include a test for fuzzing the input of plugins and loading them one by one with a test video source and changing every parameter with all possible values perhaps also beyond the defined scope to test their stability links best fuzzing tool for our use case may be ,1
7686,25451838388.0,IssuesEvent,2022-11-24 11:03:49,Budibase/budibase,https://api.github.com/repos/Budibase/budibase,closed,Query Rows does not work with bindings when using a MySQL datasource,bug binding automations sev2 - severe filtering,"**Hosting**
- Self
- Method: BudiCLI
- Budibase Version: 1.0.178
- App Version: 1.0.178
**Describe the bug**
Query Rows does not work with bindings when using a MySQL datasource. Instead it returns all rows until the row limit is hit.
**To Reproduce**
Steps to reproduce the behavior:
1. Create a new app with a MySQL data source
2. Create two tables Record and RecordType
3. Create two columns in RecordType: Type/Text , Value/Number
4. Create at least two rows in the RecordType table, with both columns filled with different values
5. Create the following relationship between the tables: One RecordType row > many Record rows
6. Create a column in the Record table called Value and make it a number type
7. Create an automation to launch when creating a row
8. Create a Query Rows step in the automation to query the RecordType table and filter it by the foreign key from the trigger row
9. Create an Update Rows step in the automation to update the Record Table
10. Set the RecordType field to `return $(""trigger.row.RecordType"");`
11. Set the Fk_RecordType_Record field to `return $(""trigger.row.fk_RecordType_Record"");`
12. Set the Value field to `return $(""steps.1.rows.0.Value"");`
13. Set the RowID field to `return $(""trigger.id"");`
14. Press Run test at the top of the automation page and select anything but the first record in the RecordType table created earlier
15. The first value will have been pulled through instead of the correct value.
Query Row Input:
`
{
""tableId"": ""datasource_plus_fc2c66277ba54f2caac0dc36d5efc531__RecordType"",
""filters"": {
""string"": {},
""fuzzy"": {},
""range"": {},
""equal"": {
""id"": null
},
""notEqual"": {},
""empty"": {},
""notEmpty"": {},
""contains"": {},
""notContains"": {}
},
""filters-def"": [
{
""id"": ""Aa5fR_PiF"",
""field"": ""id"",
""operator"": ""equal"",
""value"": ""{{ js \""cmV0dXJuICQoInRyaWdnZXIucm93LmZrX1JlY29yZFR5cGVfUmVjb3JkIik7\"" }}"",
""valueType"": ""Binding"",
""type"": ""number""
}
],
""sortColumn"": ""id"",
""sortOrder"": ""ascending"",
""limit"": ""50""
}
`
Query Row Output:
`
{
""rows"": [
{
""id"": 1,
""Type"": ""Type1"",
""Value"": 1,
""_id"": ""%5B1%5D"",
""tableId"": ""datasource_plus_fc2c66277ba54f2caac0dc36d5efc531__RecordType"",
""_rev"": ""rev""
},
{
""id"": 2,
""Type"": ""Type2"",
""Value"": 2,
""_id"": ""%5B2%5D"",
""tableId"": ""datasource_plus_fc2c66277ba54f2caac0dc36d5efc531__RecordType"",
""_rev"": ""rev""
},
{
""id"": 3,
""Type"": ""Type3"",
""Value"": 3,
""_id"": ""%5B3%5D"",
""tableId"": ""datasource_plus_fc2c66277ba54f2caac0dc36d5efc531__RecordType"",
""_rev"": ""rev"",
""Record"": [
{
""_id"": ""%5B18%5D""
},
{
""_id"": ""%5B19%5D""
},
{
""_id"": ""%5B20%5D""
},
{
""_id"": ""%5B21%5D""
}
]
}
],
""success"": true
}
`
Update Row Output:
`
{
""row"": {
""id"": 22,
""fk_RecordType_Record"": 3,
""Value"": 1,
""_id"": ""%5B22%5D"",
""tableId"": ""datasource_plus_fc2c66277ba54f2caac0dc36d5efc531__Record"",
""_rev"": ""rev"",
""RecordType"": [
{
""_id"": ""%5B22%5D""
}
]
},
""response"": ""Record saved successfully"",
""id"": ""%5B22%5D"",
""revision"": ""rev"",
""success"": true
}
`
**Expected behavior**
The correct values should have been filtered to the expected position, e.g. in the outputs above the expected value is 3 instead of 1
**Desktop (please complete the following information):**
- OS: Windows 10
- Browser: Chrome
- Version: 101.0.4951.67
**Additional context**
Both handlebars and javascript bindings have this issue
",1.0,"Query Rows does not work with bindings when using a MySQL datasource - **Hosting**
- Self
- Method: BudiCLI
- Budibase Version: 1.0.178
- App Version: 1.0.178
**Describe the bug**
Query Rows does not work with bindings when using a MySQL datasource. Instead it returns all rows until the row limit is hit.
**To Reproduce**
Steps to reproduce the behavior:
1. Create a new app with a MySQL data source
2. Create two tables Record and RecordType
3. Create two columns in RecordType: Type/Text , Value/Number
4. Create at least two rows in the RecordType table, with both columns filled with different values
5. Create the following relationship between the tables: One RecordType row > many Record rows
6. Create a column in the Record table called Value and make it a number type
7. Create an automation to launch when creating a row
8. Create a Query Rows step in the automation to query the RecordType table and filter it by the foreign key from the trigger row
9. Create an Update Rows step in the automation to update the Record Table
10. Set the RecordType field to `return $(""trigger.row.RecordType"");`
11. Set the Fk_RecordType_Record field to `return $(""trigger.row.fk_RecordType_Record"");`
12. Set the Value field to `return $(""steps.1.rows.0.Value"");`
13. Set the RowID field to `return $(""trigger.id"");`
14. Press Run test at the top of the automation page and select anything but the first record in the RecordType table created earlier
15. The first value will have been pulled through instead of the correct value.
Query Row Input:
`
{
""tableId"": ""datasource_plus_fc2c66277ba54f2caac0dc36d5efc531__RecordType"",
""filters"": {
""string"": {},
""fuzzy"": {},
""range"": {},
""equal"": {
""id"": null
},
""notEqual"": {},
""empty"": {},
""notEmpty"": {},
""contains"": {},
""notContains"": {}
},
""filters-def"": [
{
""id"": ""Aa5fR_PiF"",
""field"": ""id"",
""operator"": ""equal"",
""value"": ""{{ js \""cmV0dXJuICQoInRyaWdnZXIucm93LmZrX1JlY29yZFR5cGVfUmVjb3JkIik7\"" }}"",
""valueType"": ""Binding"",
""type"": ""number""
}
],
""sortColumn"": ""id"",
""sortOrder"": ""ascending"",
""limit"": ""50""
}
`
Query Row Output:
`
{
""rows"": [
{
""id"": 1,
""Type"": ""Type1"",
""Value"": 1,
""_id"": ""%5B1%5D"",
""tableId"": ""datasource_plus_fc2c66277ba54f2caac0dc36d5efc531__RecordType"",
""_rev"": ""rev""
},
{
""id"": 2,
""Type"": ""Type2"",
""Value"": 2,
""_id"": ""%5B2%5D"",
""tableId"": ""datasource_plus_fc2c66277ba54f2caac0dc36d5efc531__RecordType"",
""_rev"": ""rev""
},
{
""id"": 3,
""Type"": ""Type3"",
""Value"": 3,
""_id"": ""%5B3%5D"",
""tableId"": ""datasource_plus_fc2c66277ba54f2caac0dc36d5efc531__RecordType"",
""_rev"": ""rev"",
""Record"": [
{
""_id"": ""%5B18%5D""
},
{
""_id"": ""%5B19%5D""
},
{
""_id"": ""%5B20%5D""
},
{
""_id"": ""%5B21%5D""
}
]
}
],
""success"": true
}
`
Update Row Output:
`
{
""row"": {
""id"": 22,
""fk_RecordType_Record"": 3,
""Value"": 1,
""_id"": ""%5B22%5D"",
""tableId"": ""datasource_plus_fc2c66277ba54f2caac0dc36d5efc531__Record"",
""_rev"": ""rev"",
""RecordType"": [
{
""_id"": ""%5B22%5D""
}
]
},
""response"": ""Record saved successfully"",
""id"": ""%5B22%5D"",
""revision"": ""rev"",
""success"": true
}
`
**Expected behavior**
The correct values should have been filtered to the expected position, e.g. in the outputs above the expected value is 3 instead of 1
**Desktop (please complete the following information):**
- OS: Windows 10
- Browser: Chrome
- Version: 101.0.4951.67
**Additional context**
Both handlebars and javascript bindings have this issue
",1,query rows does not work with bindings when using a mysql datasource hosting self method budicli budibase version app version describe the bug query rows does not work with bindings when using a mysql datasource instead it returns all rows until the row limit is hit to reproduce steps to reproduce the behavior create a new app with a mysql data source create two tables record and recordtype create two columns in recordtype type text value number create at least two rows in the with both columns filled with different values in the recordtype table create the following relationsip between the tables one recordtype row many record rows create a column in the record table called value and make it a number type create an automation to launch when creating a row create a query rows step in the automation to query the recordtype table and filter it by the foreign key from the trigger row create an update rows step in the automation to update the record table set the recordtype field to return trigger row recordtype set the fk recordtype record field to return trigger row fk recordtype record set the value field to return steps rows value set the rowid field to return trigger id press run test at the top of the automation page and select anything but the first record in the recordtype table created earlier the first value will have been pulled through instead of the correct value query row input tableid datasource plus recordtype filters string fuzzy range equal id null notequal empty notempty contains notcontains filters def id pif field id operator equal value js valuetype binding type number sortcolumn id sortorder ascending limit query row output rows id type value id tableid datasource plus recordtype rev rev id type value id tableid datasource plus recordtype rev rev id type value id tableid datasource plus recordtype rev rev record id id id id success true update row output row id fk recordtype record value id tableid datasource plus record rev rev recordtype id response record saved successfully id revision rev success true expected behavior the correct values have been filtered to the expected position e g in the outputs above the expected value is instead of desktop please complete the following information os windows browser chrome version additional context both handlebars and javascript bindings have this issue ,1
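For context on the report above: the Query Row Input shows the resolved filter as `"equal": { "id": null }`, i.e. the binding from `filters-def` was never evaluated, so the query runs without an effective WHERE clause and returns every row up to the limit. The sketch below, using hypothetical helper names rather than Budibase's real server code, illustrates the intended behaviour of resolving the binding against the trigger context before the filter object is built:

```javascript
// Illustration only, with hypothetical helper names (not Budibase's server code).
// evaluateBinding() stands in for the handlebars/JS binding evaluator; the key point
// is that it must run against the trigger context before the filter object is built,
// otherwise the filter value stays null and the query returns every row.
function resolveEqualFilters(filterDefs, evaluateBinding, triggerContext) {
  const equal = {};
  for (const def of filterDefs) {
    if (def.operator !== 'equal') continue;
    const value = evaluateBinding(def.value, triggerContext);
    if (value !== null && value !== undefined) {
      equal[def.field] = def.type === 'number' ? Number(value) : value;
    }
  }
  return { equal: equal };
}
```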
391004,11567200810.0,IssuesEvent,2020-02-20 13:55:16,bisq-network/bisq,https://api.github.com/repos/bisq-network/bisq,closed,Message state not updated properly during arbitration,a:bug in:arbitration is:critical bug is:priority,"When the arbitrator is offline during opening of a dispute the initial message state will stay ""Message saved in receiver's mailbox"" forever. Should be ""Message arrival confirmed by receiver"" after arbitrator is online again.
",1.0,"Message state not updated properly during arbitration - When the arbitrator is offline during opening of a dispute the initial message state will stay ""Message saved in receiver's mailbox"" forever. Should be ""Message arrival confirmed by receiver"" after arbitrator is online again.
",0,message state not updated properly during arbitration when the arbitrator is offline during opening of a dispute the initial message state will stay message saved in receiver s mailbox forever should be message arrival confirmed by receiver after arbitrator is online again ,0
414102,12099035620.0,IssuesEvent,2020-04-20 11:25:03,hotosm/tasking-manager,https://api.github.com/repos/hotosm/tasking-manager,closed,Show loading progress for login delay,Component: Frontend Priority: Low,"While logging in, there is a slight delay for the authentication, during which I can move around and click on the page for other actions. Ideally, I expect the page to prevent further action until the login shows some result.

",1.0,"Show loading progress for login delay - While logging in, there is a slight delay for the authentication, during which I can move around and click on the page for other actions. Ideally, I expect the page preventing further action till the login shows some result.

",0,show loading progress for login delay while logging in there is a slight delay for the authentication during which i can move around and click on the page for other actions ideally i expect the page preventing further action till the login shows some result ,0
3270,13305817996.0,IssuesEvent,2020-08-25 19:10:30,pulumi/pulumi,https://api.github.com/repos/pulumi/pulumi,opened,Add missing methods to Stack/Workspace,area/automation-api,"The Automation API is missing a few methods `pulumi cancel`, and import/export just to name a few. We should do a sweep to find missing methods and add them where appropriate.",1.0,"Add missing methods to Stack/Workspace - The Automation API is missing a few methods `pulumi cancel`, and import/export just to name a few. We should do a sweep to find missing methods and add them where appropriate.",1,add missing methods to stack workspace the automation api is missing a few methods pulumi cancel and import export just to name a few we should do a sweep to find missing methods and add them where appropriate ,1
10268,32061687617.0,IssuesEvent,2023-09-24 18:28:14,yugabyte/yugabyte-db,https://api.github.com/repos/yugabyte/yugabyte-db,opened,"[YSQL][xCluster] Alter table with add column, Default value NOW() shows discrepancy in data in xcluster setup",kind/bug area/ysql QA xCluster status/awaiting-triage qa_automation,"### Description
Version: observed on master
Observed a discrepancy in the values of the generated_at column between the source and target databases. The value at the source side is different from the value at the target side.
### Steps:
1. Create table at both source
`CREATE TABLE tab(id int, name text)`
2. Restore database at target
3. Setup replication
4. ALTER table with default value at source
`ALTER TABLE tab ADD COLUMN generated_at timestamp DEFAULT NOW()`
5. Load data to table tab at source (REPLICATION is paused)
6. ALTER table with default value at Target
`ALTER TABLE tab ADD COLUMN generated_at timestamp DEFAULT NOW()`
7. Wait a few minutes for the data to replicate
**Issue**: Value of generated_at column at source is different from value at Target
### Warning: Please confirm that this issue does not contain any sensitive information
- [X] I confirm this issue does not contain any sensitive information.",1.0,"[YSQL][xCluster] Alter table with add column, Default value NOW() shows discrepancy in data in xcluster setup - ### Description
Version: observed on master
Observed a discrepancy in the values of the generated_at column between the source and target databases. The value at the source side is different from the value at the target side.
### Steps:
1. Create table at both source
`CREATE TABLE tab(id int, name text)`
2. Restore database at target
3. Setup replication
4. ALTER table with default value at source
`ALTER TABLE tab ADD COLUMN generated_at timestamp DEFAULT NOW()`
5. Load data to table tab at source (REPLICATION is paused)
6. ALTER table with default value at Target
`ALTER TABLE tab ADD COLUMN generated_at timestamp DEFAULT NOW()`
7. Wait a few minutes for the data to replicate
**Issue**: Value of generated_at column at source is different from value at Target
### Warning: Please confirm that this issue does not contain any sensitive information
- [X] I confirm this issue does not contain any sensitive information.",1, alter table with add colum default value now shows discrepancy in data in xcluster setup description version observed on master observed a discrepancy in the values of the generated at column between the source and target databases the value at the source side is different from the value at the target side steps create table at both source create table tab id int name text restore database at target setup replication alter table with default value at source alter table tab add column generated at timestamp default now load data to table tab at source replication is paused alter table with default value at target alter table tab add column generated at timestamp default now wait for few mins data to replicate issue value of generated at column at source is different from value at target warning please confirm that this issue does not contain any sensitive information i confirm this issue does not contain any sensitive information ,1
909,8678743841.0,IssuesEvent,2018-11-30 20:59:25,MicrosoftDocs/azure-docs,https://api.github.com/repos/MicrosoftDocs/azure-docs,closed,Improvements in Exception list validation,assigned-to-author automation/svc product-feedback triaged,"Hello, I wanted to point out a few improvements that can help with the solution. 1) On the Start or Stop action, exclude the VMs that are already in a Running or Deallocated power state 2) Don't send an email if there is no action to be taken 3) Why validate the exclusion list twice, once at the start to check if there are any issues with the list and then after processing the resource groups to create the final exclusion list? For larger environments this can significantly increase the processing time. 4) Why not create a parallel execution of the VMs validation and/or the RGs.
$PowerState = (Get-AzureRmVM -ResourceGroupName $vmResource.ResourceGroupName -name $vmResource.Name -Status).statuses[1].code
If ($Action -eq 'Stop' -and $PowerState -ne ""PowerState/deallocated"") {}
-
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 225c9d05-83dd-b006-0025-3753f5ab25bf
* Version Independent ID: 9eecef0c-b1cb-1136-faf7-542214492096
* Content: [Start/Stop VMs during off-hours solution](https://docs.microsoft.com/en-us/azure/automation/automation-solution-vm-management#feedback)
* Content Source: [articles/automation/automation-solution-vm-management.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-solution-vm-management.md)
* Service: **automation**
* GitHub Login: @georgewallace
* Microsoft Alias: **gwallace**",1.0,"Improvements in Exception list validation - Hello, I wanted to point out a few improvements that can help with the solution. 1) On the Start or Stop action, exclude the VMs that are already in a Running or Deallocated power state 2) Don't send an email if there is no action to be taken 3) Why validate the exclusion list twice, once at the start to check if there are any issues with the list and then after processing the resource groups to create the final exclusion list? For larger environments this can significantly increase the processing time. 4) Why not create a parallel execution of the VMs validation and/or the RGs.
$PowerState = (Get-AzureRmVM -ResourceGroupName $vmResource.ResourceGroupName -name $vmResource.Name -Status).statuses[1].code
If ($Action -eq 'Stop' -and $PowerState -ne ""PowerState/deallocated"") {}
-
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 225c9d05-83dd-b006-0025-3753f5ab25bf
* Version Independent ID: 9eecef0c-b1cb-1136-faf7-542214492096
* Content: [Start/Stop VMs during off-hours solution](https://docs.microsoft.com/en-us/azure/automation/automation-solution-vm-management#feedback)
* Content Source: [articles/automation/automation-solution-vm-management.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-solution-vm-management.md)
* Service: **automation**
* GitHub Login: @georgewallace
* Microsoft Alias: **gwallace**",1,improvements in exception list validation hello i wanted to point out a few inmprovements that can help with the solution on the start or stop action exclude the vm s that are already on a running or dealocated power state don t send an email if there is no action to be taken why validate twice the exclusion list once at the start to check if there are any issues with the list and then after processing the resource groups to create the final exclusion list as for larger environments it can increase significant the processing time why not create a parallel execution of the vms validation and or the rgs powerstate get azurermvm resourcegroupname vmresource resourcegroupname name vmresource name status statuses code if action eq stop and powerstate ne powerstate deallocated document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service automation github login georgewallace microsoft alias gwallace ,1
5701,20778536304.0,IssuesEvent,2022-03-16 12:51:06,pingcap/tidb,https://api.github.com/repos/pingcap/tidb,reopened,using MySQL 5.5 and 5.6 clients connecting with a passwordless account to tidb fail,type/bug sig/sql-infra severity/minor found/automation,"## Bug Report
Please answer these questions before submitting your issue. Thanks!
### 1. Minimal reproduce step (Required)
1. download 5.5 and 5.6 version MySQL client
dbdeployer downloads get-unpack mysql-5.5.62.tar.xz
dbdeployer downloads get-unpack mysql-5.6.44.tar.xz
2. create user nopw with no password,
CREATE USER 'nopw'@'%' IDENTIFIED WITH mysql_native_password
3. use 5.5 and 5.6 MySQL clients to connect to tidb nightly version (v5.5.0-nightly-20220208) with this ""nopw"" user
### 2. What did you expect to see? (Required)
connect successfully
### 3. What did you see instead (Required)
root@wkload-0:/upgrade-test# /root/opt/mysql/5.5.62/bin/mysql -u nopw -h tiup-peer -P3390
ERROR 2012 (HY000): Error in server handshake
### 4. What is your TiDB version? (Required)
v5.5.0-nightly-20220208
",1.0,"using MySQL 5.5 and 5.6 clients connecting with a passwordless account to tidb fail - ## Bug Report
Please answer these questions before submitting your issue. Thanks!
### 1. Minimal reproduce step (Required)
1. download 5.5 and 5.6 version MySQL client
dbdeployer downloads get-unpack mysql-5.5.62.tar.xz
dbdeployer downloads get-unpack mysql-5.6.44.tar.xz
2. create user nopw with no password,
CREATE USER 'nopw'@'%' IDENTIFIED WITH mysql_native_password
3. use 5.5 and 5.6 MySQL clients to connect to tidb nightly version (v5.5.0-nightly-20220208) with this ""nopw"" user
### 2. What did you expect to see? (Required)
connect successfully
### 3. What did you see instead (Required)
root@wkload-0:/upgrade-test# /root/opt/mysql/5.5.62/bin/mysql -u nopw -h tiup-peer -P3390
ERROR 2012 (HY000): Error in server handshake
### 4. What is your TiDB version? (Required)
v5.5.0-nightly-20220208
",1,using mysql and clients connecting with a passwordless account to tidb fail bug report please answer these questions before submitting your issue thanks minimal reproduce step required download and version mysql client dbdeployer downloads get unpack mysql tar xz dbdeployer downloads get unpack mysql tar xz create use nopw with no password create user nopw identified with mysql native password use and mysql client connect to tidb nightly version nightly with this nopw user what did you expect to see required connect successully what did you see instead required root wkload upgrade test root opt mysql bin mysql u nopw h tiup peer error error in server handshake what is your tidb version required nightly ,1
33896,6266116819.0,IssuesEvent,2017-07-16 23:13:36,MartinLoeper/KAMP-DSL,https://api.github.com/repos/MartinLoeper/KAMP-DSL,closed,Create an easy installer,documentation enhancement,"As the name says: create an installer using software configurations and project sets.
Update the wiki accordingly.",1.0,"Create an easy installer - As the name says: create an installer using software configurations and project sets.
Update the wiki accordingly.",0,create an easy installer as the name says create an installer using software configurations and project sets update the wiki accordingly ,0
161137,20120415002.0,IssuesEvent,2022-02-08 01:16:43,arohablue/BlockDockServer,https://api.github.com/repos/arohablue/BlockDockServer,closed,CVE-2020-10693 (Medium) detected in hibernate-validator-5.3.5.Final.jar - autoclosed,security vulnerability,"## CVE-2020-10693 - Medium Severity Vulnerability
Vulnerable Library - hibernate-validator-5.3.5.Final.jar
Path to dependency file: /BlockDockServer/build.gradle
Path to vulnerable library: /root/.gradle/caches/modules-2/files-2.1/org.hibernate/hibernate-validator/5.3.5.Final/622a9bcef2eed6d41b5b8e0662c36212009e375/hibernate-validator-5.3.5.Final.jar
A flaw was found in Hibernate Validator version 6.1.2.Final. A bug in the message interpolation processor enables invalid EL expressions to be evaluated as if they were valid. This flaw allows attackers to bypass input sanitation (escaping, stripping) controls that developers may have put in place when handling user-controlled data in error messages.
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2020-10693 (Medium) detected in hibernate-validator-5.3.5.Final.jar - autoclosed - ## CVE-2020-10693 - Medium Severity Vulnerability
Vulnerable Library - hibernate-validator-5.3.5.Final.jar
Path to dependency file: /BlockDockServer/build.gradle
Path to vulnerable library: /root/.gradle/caches/modules-2/files-2.1/org.hibernate/hibernate-validator/5.3.5.Final/622a9bcef2eed6d41b5b8e0662c36212009e375/hibernate-validator-5.3.5.Final.jar
A flaw was found in Hibernate Validator version 6.1.2.Final. A bug in the message interpolation processor enables invalid EL expressions to be evaluated as if they were valid. This flaw allows attackers to bypass input sanitation (escaping, stripping) controls that developers may have put in place when handling user-controlled data in error messages.
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve medium detected in hibernate validator final jar autoclosed cve medium severity vulnerability vulnerable library hibernate validator final jar hibernate s bean validation jsr reference implementation library home page a href path to dependency file blockdockserver build gradle path to vulnerable library root gradle caches modules files org hibernate hibernate validator final hibernate validator final jar dependency hierarchy jar root library grails datastore gorm release jar x hibernate validator final jar vulnerable library vulnerability details a flaw was found in hibernate validator version final a bug in the message interpolation processor enables invalid el expressions to be evaluated as if they were valid this flaw allows attackers to bypass input sanitation escaping stripping controls that developers may have put in place when handling user controlled data in error messages publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org hibernate hibernate validator final final step up your open source security game with whitesource ,0
14384,17402348882.0,IssuesEvent,2021-08-02 21:44:58,googleapis/python-firestore,https://api.github.com/repos/googleapis/python-firestore,opened,Split out system tests into separate Kokoro job,type: process,"Working to reduce CI latency. Here are timings on my local machine (note the pre-run with `--install-only` to avoid measuring virtualenv creation time):
```bash
$ for job in $(nox --list | grep ""^\*"" | cut -d "" "" -f 2); do
echo $job;
nox -e $job --install-only;
time nox -re $job;
done
lint
nox > Running session lint
nox > Creating virtual environment (virtualenv) using python3.8 in .nox/lint
nox > python -m pip install flake8 black==19.10b0
nox > Skipping black run, as --install-only is set.
nox > Skipping flake8 run, as --install-only is set.
nox > Session lint was successful.
nox > Running session lint
nox > Re-using existing virtual environment at .nox/lint.
nox > python -m pip install flake8 black==19.10b0
nox > black --check docs google tests noxfile.py setup.py
All done! ✨ 🍰 ✨
109 files would be left unchanged.
nox > flake8 google tests
nox > Session lint was successful.
real 0m3.902s
user 0m16.218s
sys 0m0.277s
blacken
nox > Running session blacken
nox > Creating virtual environment (virtualenv) using python3.8 in .nox/blacken
nox > python -m pip install black==19.10b0
nox > Skipping black run, as --install-only is set.
nox > Session blacken was successful.
nox > Running session blacken
nox > Re-using existing virtual environment at .nox/blacken.
nox > python -m pip install black==19.10b0
nox > black docs google tests noxfile.py setup.py
All done! ✨ 🍰 ✨
109 files left unchanged.
nox > Session blacken was successful.
real 0m1.007s
user 0m0.884s
sys 0m0.127s
lint_setup_py
nox > Running session lint_setup_py
nox > Creating virtual environment (virtualenv) using python3.8 in .nox/lint_setup_py
nox > python -m pip install docutils pygments
nox > Skipping python run, as --install-only is set.
nox > Session lint_setup_py was successful.
nox > Running session lint_setup_py
nox > Re-using existing virtual environment at .nox/lint_setup_py.
nox > python -m pip install docutils pygments
nox > python setup.py check --restructuredtext --strict
running check
nox > Session lint_setup_py was successful.
real 0m1.067s
user 0m0.946s
sys 0m0.123s
unit-3.6
nox > Running session unit-3.6
nox > Creating virtual environment (virtualenv) using python3.6 in .nox/unit-3-6
nox > python -m pip install asyncmock pytest-asyncio -c /home/tseaver/projects/agendaless/Google/src/python-firestore/testing/constraints-3.6.txt
nox > python -m pip install mock pytest pytest-cov aiounittest -c /home/tseaver/projects/agendaless/Google/src/python-firestore/testing/constraints-3.6.txt
nox > python -m pip install -e . -c /home/tseaver/projects/agendaless/Google/src/python-firestore/testing/constraints-3.6.txt
nox > Skipping py.test run, as --install-only is set.
nox > Session unit-3.6 was successful.
nox > Running session unit-3.6
nox > Re-using existing virtual environment at .nox/unit-3-6.
nox > python -m pip install asyncmock pytest-asyncio -c /home/tseaver/projects/agendaless/Google/src/python-firestore/testing/constraints-3.6.txt
nox > python -m pip install mock pytest pytest-cov aiounittest -c /home/tseaver/projects/agendaless/Google/src/python-firestore/testing/constraints-3.6.txt
nox > python -m pip install -e . -c /home/tseaver/projects/agendaless/Google/src/python-firestore/testing/constraints-3.6.txt
nox > py.test --quiet --junitxml=unit_3.6_sponge_log.xml --cov=google/cloud --cov=tests/unit --cov-append --cov-config=.coveragerc --cov-report= --cov-fail-under=0 tests/unit
........................................................................ [ 5%]
...............................................................s..s.ss.. [ 10%]
........................................................................ [ 15%]
........................................................................ [ 20%]
...................................................s..s.ss.............. [ 25%]
........................................................................ [ 30%]
........................................................................ [ 35%]
........................................................................ [ 40%]
........................................................................ [ 45%]
........................................................................ [ 50%]
........................................................................ [ 55%]
........................................................................ [ 60%]
........................................................................ [ 65%]
........................................................................ [ 70%]
............................................................ssssssssssss [ 75%]
ssssssssssssssssssssssssssssssss........................................ [ 80%]
........................................................................ [ 85%]
........................................................................ [ 90%]
........................................................................ [ 95%]
........................................................... [100%]
- generated xml file: /home/tseaver/projects/agendaless/Google/src/python-firestore/unit_3.6_sponge_log.xml -
1375 passed, 52 skipped in 14.10s
nox > Session unit-3.6 was successful.
real 0m18.388s
user 0m17.654s
sys 0m0.675s
unit-3.7
nox > Running session unit-3.7
nox > Creating virtual environment (virtualenv) using python3.7 in .nox/unit-3-7
nox > python -m pip install asyncmock pytest-asyncio -c /home/tseaver/projects/agendaless/Google/src/python-firestore/testing/constraints-3.7.txt
nox > python -m pip install mock pytest pytest-cov aiounittest -c /home/tseaver/projects/agendaless/Google/src/python-firestore/testing/constraints-3.7.txt
nox > python -m pip install -e . -c /home/tseaver/projects/agendaless/Google/src/python-firestore/testing/constraints-3.7.txt
nox > Skipping py.test run, as --install-only is set.
nox > Session unit-3.7 was successful.
nox > Running session unit-3.7
nox > Re-using existing virtual environment at .nox/unit-3-7.
nox > python -m pip install asyncmock pytest-asyncio -c /home/tseaver/projects/agendaless/Google/src/python-firestore/testing/constraints-3.7.txt
nox > python -m pip install mock pytest pytest-cov aiounittest -c /home/tseaver/projects/agendaless/Google/src/python-firestore/testing/constraints-3.7.txt
nox > python -m pip install -e . -c /home/tseaver/projects/agendaless/Google/src/python-firestore/testing/constraints-3.7.txt
nox > py.test --quiet --junitxml=unit_3.7_sponge_log.xml --cov=google/cloud --cov=tests/unit --cov-append --cov-config=.coveragerc --cov-report= --cov-fail-under=0 tests/unit
........................................................................ [ 5%]
................................................................s..s..ss [ 10%]
........................................................................ [ 15%]
........................................................................ [ 20%]
....................................................s..s..ss............ [ 25%]
........................................................................ [ 30%]
........................................................................ [ 35%]
........................................................................ [ 40%]
........................................................................ [ 45%]
........................................................................ [ 50%]
........................................................................ [ 55%]
........................................................................ [ 60%]
........................................................................ [ 65%]
........................................................................ [ 70%]
............................................................ssssssssssss [ 75%]
ssssssssssssssssssssssssssssssss........................................ [ 80%]
........................................................................ [ 85%]
........................................................................ [ 90%]
........................................................................ [ 95%]
........................................................... [100%]
- generated xml file: /home/tseaver/projects/agendaless/Google/src/python-firestore/unit_3.7_sponge_log.xml -
1375 passed, 52 skipped in 14.09s
nox > Session unit-3.7 was successful.
real 0m17.930s
user 0m17.185s
sys 0m0.732s
unit-3.8
nox > Running session unit-3.8
nox > Creating virtual environment (virtualenv) using python3.8 in .nox/unit-3-8
nox > python -m pip install asyncmock pytest-asyncio -c /home/tseaver/projects/agendaless/Google/src/python-firestore/testing/constraints-3.8.txt
nox > python -m pip install mock pytest pytest-cov aiounittest -c /home/tseaver/projects/agendaless/Google/src/python-firestore/testing/constraints-3.8.txt
nox > python -m pip install -e . -c /home/tseaver/projects/agendaless/Google/src/python-firestore/testing/constraints-3.8.txt
nox > Skipping py.test run, as --install-only is set.
nox > Session unit-3.8 was successful.
nox > Running session unit-3.8
nox > Re-using existing virtual environment at .nox/unit-3-8.
nox > python -m pip install asyncmock pytest-asyncio -c /home/tseaver/projects/agendaless/Google/src/python-firestore/testing/constraints-3.8.txt
nox > python -m pip install mock pytest pytest-cov aiounittest -c /home/tseaver/projects/agendaless/Google/src/python-firestore/testing/constraints-3.8.txt
nox > python -m pip install -e . -c /home/tseaver/projects/agendaless/Google/src/python-firestore/testing/constraints-3.8.txt
nox > py.test --quiet --junitxml=unit_3.8_sponge_log.xml --cov=google/cloud --cov=tests/unit --cov-append --cov-config=.coveragerc --cov-report= --cov-fail-under=0 tests/unit
........................................................................ [ 5%]
................................................................s..s..ss [ 10%]
........................................................................ [ 15%]
........................................................................ [ 20%]
....................................................s..s..ss............ [ 25%]
........................................................................ [ 30%]
........................................................................ [ 35%]
........................................................................ [ 40%]
........................................................................ [ 45%]
........................................................................ [ 50%]
........................................................................ [ 55%]
........................................................................ [ 60%]
........................................................................ [ 65%]
........................................................................ [ 70%]
............................................................ssssssssssss [ 75%]
ssssssssssssssssssssssssssssssss........................................ [ 80%]
........................................................................ [ 85%]
........................................................................ [ 90%]
........................................................................ [ 95%]
........................................................... [100%]
- generated xml file: /home/tseaver/projects/agendaless/Google/src/python-firestore/unit_3.8_sponge_log.xml -
1375 passed, 52 skipped in 13.40s
nox > Session unit-3.8 was successful.
real 0m17.162s
user 0m16.517s
sys 0m0.638s
unit-3.9
nox > Running session unit-3.9
nox > Creating virtual environment (virtualenv) using python3.9 in .nox/unit-3-9
nox > python -m pip install asyncmock pytest-asyncio -c /home/tseaver/projects/agendaless/Google/src/python-firestore/testing/constraints-3.9.txt
nox > python -m pip install mock pytest pytest-cov aiounittest -c /home/tseaver/projects/agendaless/Google/src/python-firestore/testing/constraints-3.9.txt
nox > python -m pip install -e . -c /home/tseaver/projects/agendaless/Google/src/python-firestore/testing/constraints-3.9.txt
nox > Skipping py.test run, as --install-only is set.
nox > Session unit-3.9 was successful.
nox > Running session unit-3.9
nox > Re-using existing virtual environment at .nox/unit-3-9.
nox > python -m pip install asyncmock pytest-asyncio -c /home/tseaver/projects/agendaless/Google/src/python-firestore/testing/constraints-3.9.txt
nox > python -m pip install mock pytest pytest-cov aiounittest -c /home/tseaver/projects/agendaless/Google/src/python-firestore/testing/constraints-3.9.txt
nox > python -m pip install -e . -c /home/tseaver/projects/agendaless/Google/src/python-firestore/testing/constraints-3.9.txt
nox > py.test --quiet --junitxml=unit_3.9_sponge_log.xml --cov=google/cloud --cov=tests/unit --cov-append --cov-config=.coveragerc --cov-report= --cov-fail-under=0 tests/unit
........................................................................ [ 5%]
................................................................s..s..ss [ 10%]
........................................................................ [ 15%]
........................................................................ [ 20%]
....................................................s..s..ss............ [ 25%]
........................................................................ [ 30%]
........................................................................ [ 35%]
........................................................................ [ 40%]
........................................................................ [ 45%]
........................................................................ [ 50%]
........................................................................ [ 55%]
........................................................................ [ 60%]
........................................................................ [ 65%]
........................................................................ [ 70%]
............................................................ssssssssssss [ 75%]
ssssssssssssssssssssssssssssssss........................................ [ 80%]
........................................................................ [ 85%]
........................................................................ [ 90%]
........................................................................ [ 95%]
........................................................... [100%]
- generated xml file: /home/tseaver/projects/agendaless/Google/src/python-firestore/unit_3.9_sponge_log.xml -
1375 passed, 52 skipped in 15.70s
nox > Session unit-3.9 was successful.
real 0m19.250s
user 0m18.510s
sys 0m0.715s
system-3.7
nox > Running session system-3.7
nox > Creating virtual environment (virtualenv) using python3.7 in .nox/system-3-7
nox > python -m pip install --pre grpcio
nox > python -m pip install mock pytest google-cloud-testutils pytest-asyncio -c /home/tseaver/projects/agendaless/Google/src/python-firestore/testing/constraints-3.7.txt
nox > python -m pip install -e . -c /home/tseaver/projects/agendaless/Google/src/python-firestore/testing/constraints-3.7.txt
nox > Skipping py.test run, as --install-only is set.
nox > Session system-3.7 was successful.
nox > Running session system-3.7
nox > Re-using existing virtual environment at .nox/system-3-7.
nox > python -m pip install --pre grpcio
nox > python -m pip install mock pytest google-cloud-testutils pytest-asyncio -c /home/tseaver/projects/agendaless/Google/src/python-firestore/testing/constraints-3.7.txt
nox > python -m pip install -e . -c /home/tseaver/projects/agendaless/Google/src/python-firestore/testing/constraints-3.7.txt
nox > py.test --verbose --junitxml=system_3.7_sponge_log.xml tests/system
============================= test session starts ==============================
platform linux -- Python 3.7.6, pytest-6.2.4, py-1.10.0, pluggy-0.13.1 -- /home/tseaver/projects/agendaless/Google/src/python-firestore/.nox/system-3-7/bin/python
cachedir: .pytest_cache
rootdir: /home/tseaver/projects/agendaless/Google/src/python-firestore
plugins: asyncio-0.15.1
collected 77 items
tests/system/test_system.py::test_collections PASSED [ 1%]
tests/system/test_system.py::test_collections_w_import PASSED [ 2%]
tests/system/test_system.py::test_create_document PASSED [ 3%]
tests/system/test_system.py::test_create_document_w_subcollection PASSED [ 5%]
tests/system/test_system.py::test_cannot_use_foreign_key PASSED [ 6%]
tests/system/test_system.py::test_no_document PASSED [ 7%]
tests/system/test_system.py::test_document_set PASSED [ 9%]
tests/system/test_system.py::test_document_integer_field PASSED [ 10%]
tests/system/test_system.py::test_document_set_merge PASSED [ 11%]
tests/system/test_system.py::test_document_set_w_int_field PASSED [ 12%]
tests/system/test_system.py::test_document_update_w_int_field PASSED [ 14%]
tests/system/test_system.py::test_update_document PASSED [ 15%]
tests/system/test_system.py::test_document_get PASSED [ 16%]
tests/system/test_system.py::test_document_delete PASSED [ 18%]
tests/system/test_system.py::test_collection_add PASSED [ 19%]
tests/system/test_system.py::test_query_stream_w_simple_field_eq_op PASSED [ 20%]
tests/system/test_system.py::test_query_stream_w_simple_field_array_contains_op PASSED [ 22%]
tests/system/test_system.py::test_query_stream_w_simple_field_in_op PASSED [ 23%]
tests/system/test_system.py::test_query_stream_w_not_eq_op PASSED [ 24%]
tests/system/test_system.py::test_query_stream_w_simple_not_in_op PASSED [ 25%]
tests/system/test_system.py::test_query_stream_w_simple_field_array_contains_any_op PASSED [ 27%]
tests/system/test_system.py::test_query_stream_w_order_by PASSED [ 28%]
tests/system/test_system.py::test_query_stream_w_field_path PASSED [ 29%]
tests/system/test_system.py::test_query_stream_w_start_end_cursor PASSED [ 31%]
tests/system/test_system.py::test_query_stream_wo_results PASSED [ 32%]
tests/system/test_system.py::test_query_stream_w_projection PASSED [ 33%]
tests/system/test_system.py::test_query_stream_w_multiple_filters PASSED [ 35%]
tests/system/test_system.py::test_query_stream_w_offset PASSED [ 36%]
tests/system/test_system.py::test_query_with_order_dot_key PASSED [ 37%]
tests/system/test_system.py::test_query_unary PASSED [ 38%]
tests/system/test_system.py::test_collection_group_queries PASSED [ 40%]
tests/system/test_system.py::test_collection_group_queries_startat_endat PASSED [ 41%]
tests/system/test_system.py::test_collection_group_queries_filters PASSED [ 42%]
tests/system/test_system.py::test_partition_query_no_partitions PASSED [ 44%]
tests/system/test_system.py::test_partition_query PASSED [ 45%]
tests/system/test_system.py::test_get_all PASSED [ 46%]
tests/system/test_system.py::test_batch PASSED [ 48%]
tests/system/test_system.py::test_watch_document PASSED [ 49%]
tests/system/test_system.py::test_watch_collection PASSED [ 50%]
tests/system/test_system.py::test_watch_query PASSED [ 51%]
tests/system/test_system.py::test_array_union PASSED [ 53%]
tests/system/test_system.py::test_watch_query_order PASSED [ 54%]
tests/system/test_system_async.py::test_collections PASSED [ 55%]
tests/system/test_system_async.py::test_collections_w_import PASSED [ 57%]
tests/system/test_system_async.py::test_create_document PASSED [ 58%]
tests/system/test_system_async.py::test_create_document_w_subcollection PASSED [ 59%]
tests/system/test_system_async.py::test_cannot_use_foreign_key PASSED [ 61%]
tests/system/test_system_async.py::test_no_document PASSED [ 62%]
tests/system/test_system_async.py::test_document_set PASSED [ 63%]
tests/system/test_system_async.py::test_document_integer_field PASSED [ 64%]
tests/system/test_system_async.py::test_document_set_merge PASSED [ 66%]
tests/system/test_system_async.py::test_document_set_w_int_field PASSED [ 67%]
tests/system/test_system_async.py::test_document_update_w_int_field PASSED [ 68%]
tests/system/test_system_async.py::test_update_document PASSED [ 70%]
tests/system/test_system_async.py::test_document_get PASSED [ 71%]
tests/system/test_system_async.py::test_document_delete PASSED [ 72%]
tests/system/test_system_async.py::test_collection_add PASSED [ 74%]
tests/system/test_system_async.py::test_query_stream_w_simple_field_eq_op PASSED [ 75%]
tests/system/test_system_async.py::test_query_stream_w_simple_field_array_contains_op PASSED [ 76%]
tests/system/test_system_async.py::test_query_stream_w_simple_field_in_op PASSED [ 77%]
tests/system/test_system_async.py::test_query_stream_w_simple_field_array_contains_any_op PASSED [ 79%]
tests/system/test_system_async.py::test_query_stream_w_order_by PASSED [ 80%]
tests/system/test_system_async.py::test_query_stream_w_field_path PASSED [ 81%]
tests/system/test_system_async.py::test_query_stream_w_start_end_cursor PASSED [ 83%]
tests/system/test_system_async.py::test_query_stream_wo_results PASSED [ 84%]
tests/system/test_system_async.py::test_query_stream_w_projection PASSED [ 85%]
tests/system/test_system_async.py::test_query_stream_w_multiple_filters PASSED [ 87%]
tests/system/test_system_async.py::test_query_stream_w_offset PASSED [ 88%]
tests/system/test_system_async.py::test_query_with_order_dot_key PASSED [ 89%]
tests/system/test_system_async.py::test_query_unary PASSED [ 90%]
tests/system/test_system_async.py::test_collection_group_queries PASSED [ 92%]
tests/system/test_system_async.py::test_collection_group_queries_startat_endat PASSED [ 93%]
tests/system/test_system_async.py::test_collection_group_queries_filters PASSED [ 94%]
tests/system/test_system_async.py::test_partition_query_no_partitions PASSED [ 96%]
tests/system/test_system_async.py::test_partition_query PASSED [ 97%]
tests/system/test_system_async.py::test_get_all PASSED [ 98%]
tests/system/test_system_async.py::test_batch PASSED [100%]
=================== 77 passed in 211.00s (0:03:31) ===================
nox > Command py.test --verbose --junitxml=system_3.7_sponge_log.xml tests/system passed
nox > Session system-3.7 was successful.
real 3m34.561s
user 0m11.371s
sys 0m1.881s
cover
nox > Running session cover
nox > Creating virtual environment (virtualenv) using python3.8 in .nox/cover
nox > python -m pip install coverage pytest-cov
nox > Skipping coverage run, as --install-only is set.
nox > Skipping coverage run, as --install-only is set.
nox > Session cover was successful.
nox > Running session cover
nox > Re-using existing virtual environment at .nox/cover.
nox > python -m pip install coverage pytest-cov
nox > coverage report --show-missing --fail-under=100
Name Stmts Miss Branch BrPart Cover Missing
---------------------------------------------------------------------------------------------------------------------------------
google/cloud/firestore.py 35 0 0 0 100%
google/cloud/firestore_admin_v1/__init__.py 23 0 0 0 100%
google/cloud/firestore_admin_v1/services/__init__.py 0 0 0 0 100%
google/cloud/firestore_admin_v1/services/firestore_admin/__init__.py 3 0 0 0 100%
google/cloud/firestore_admin_v1/services/firestore_admin/async_client.py 168 0 38 0 100%
google/cloud/firestore_admin_v1/services/firestore_admin/client.py 282 0 90 0 100%
google/cloud/firestore_admin_v1/services/firestore_admin/pagers.py 82 0 20 0 100%
google/cloud/firestore_admin_v1/services/firestore_admin/transports/__init__.py 9 0 0 0 100%
google/cloud/firestore_admin_v1/services/firestore_admin/transports/base.py 72 0 12 0 100%
google/cloud/firestore_admin_v1/services/firestore_admin/transports/grpc.py 100 0 34 0 100%
google/cloud/firestore_admin_v1/services/firestore_admin/transports/grpc_asyncio.py 103 0 34 0 100%
google/cloud/firestore_admin_v1/types/__init__.py 6 0 0 0 100%
google/cloud/firestore_admin_v1/types/field.py 12 0 0 0 100%
google/cloud/firestore_admin_v1/types/firestore_admin.py 48 0 0 0 100%
google/cloud/firestore_admin_v1/types/index.py 28 0 0 0 100%
google/cloud/firestore_admin_v1/types/location.py 4 0 0 0 100%
google/cloud/firestore_admin_v1/types/operation.py 57 0 0 0 100%
google/cloud/firestore_bundle/__init__.py 7 0 0 0 100%
google/cloud/firestore_bundle/_helpers.py 4 0 0 0 100%
google/cloud/firestore_bundle/bundle.py 111 0 32 0 100%
google/cloud/firestore_bundle/services/__init__.py 0 0 0 0 100%
google/cloud/firestore_bundle/types/__init__.py 2 0 0 0 100%
google/cloud/firestore_bundle/types/bundle.py 33 0 0 0 100%
google/cloud/firestore_v1/__init__.py 37 0 0 0 100%
google/cloud/firestore_v1/_helpers.py 478 0 240 0 100%
google/cloud/firestore_v1/async_batch.py 19 0 2 0 100%
google/cloud/firestore_v1/async_client.py 41 0 4 0 100%
google/cloud/firestore_v1/async_collection.py 32 0 4 0 100%
google/cloud/firestore_v1/async_document.py 44 0 4 0 100%
google/cloud/firestore_v1/async_query.py 45 0 16 0 100%
google/cloud/firestore_v1/async_transaction.py 98 0 22 0 100%
google/cloud/firestore_v1/base_batch.py 32 0 4 0 100%
google/cloud/firestore_v1/base_client.py 151 0 42 0 100%
google/cloud/firestore_v1/base_collection.py 101 0 16 0 100%
google/cloud/firestore_v1/base_document.py 145 0 24 0 100%
google/cloud/firestore_v1/base_query.py 331 0 130 0 100%
google/cloud/firestore_v1/base_transaction.py 65 0 6 0 100%
google/cloud/firestore_v1/batch.py 19 0 2 0 100%
google/cloud/firestore_v1/client.py 42 0 4 0 100%
google/cloud/firestore_v1/collection.py 30 0 2 0 100%
google/cloud/firestore_v1/document.py 48 0 4 0 100%
google/cloud/firestore_v1/field_path.py 135 0 56 0 100%
google/cloud/firestore_v1/order.py 130 0 54 0 100%
google/cloud/firestore_v1/query.py 47 0 14 0 100%
google/cloud/firestore_v1/services/__init__.py 0 0 0 0 100%
google/cloud/firestore_v1/services/firestore/__init__.py 3 0 0 0 100%
google/cloud/firestore_v1/services/firestore/async_client.py 178 0 30 0 100%
google/cloud/firestore_v1/services/firestore/client.py 276 0 90 0 100%
google/cloud/firestore_v1/services/firestore/pagers.py 121 0 30 0 100%
google/cloud/firestore_v1/services/firestore/transports/__init__.py 9 0 0 0 100%
google/cloud/firestore_v1/services/firestore/transports/base.py 80 0 12 0 100%
google/cloud/firestore_v1/services/firestore/transports/grpc.py 122 0 44 0 100%
google/cloud/firestore_v1/services/firestore/transports/grpc_asyncio.py 125 0 44 0 100%
google/cloud/firestore_v1/transaction.py 97 0 22 0 100%
google/cloud/firestore_v1/transforms.py 39 0 10 0 100%
google/cloud/firestore_v1/types/__init__.py 6 0 0 0 100%
google/cloud/firestore_v1/types/common.py 16 0 0 0 100%
google/cloud/firestore_v1/types/document.py 27 0 0 0 100%
google/cloud/firestore_v1/types/firestore.py 157 0 0 0 100%
google/cloud/firestore_v1/types/query.py 66 0 0 0 100%
google/cloud/firestore_v1/types/write.py 45 0 0 0 100%
google/cloud/firestore_v1/watch.py 325 0 78 0 100%
tests/unit/__init__.py 0 0 0 0 100%
tests/unit/test_firestore_shim.py 10 0 2 0 100%
tests/unit/v1/__init__.py 0 0 0 0 100%
tests/unit/v1/_test_helpers.py 22 0 0 0 100%
tests/unit/v1/conformance_tests.py 106 0 0 0 100%
tests/unit/v1/test__helpers.py 1653 0 36 0 100%
tests/unit/v1/test_async_batch.py 98 0 0 0 100%
tests/unit/v1/test_async_client.py 267 0 18 0 100%
tests/unit/v1/test_async_collection.py 223 0 20 0 100%
tests/unit/v1/test_async_document.py 334 0 32 0 100%
tests/unit/v1/test_async_query.py 327 0 26 0 100%
tests/unit/v1/test_async_transaction.py 584 0 0 0 100%
tests/unit/v1/test_base_batch.py 98 0 0 0 100%
tests/unit/v1/test_base_client.py 238 0 0 0 100%
tests/unit/v1/test_base_collection.py 239 0 0 0 100%
tests/unit/v1/test_base_document.py 293 0 2 0 100%
tests/unit/v1/test_base_query.py 1006 0 20 0 100%
tests/unit/v1/test_base_transaction.py 75 0 0 0 100%
tests/unit/v1/test_batch.py 92 0 0 0 100%
tests/unit/v1/test_bundle.py 268 0 4 0 100%
tests/unit/v1/test_client.py 256 0 12 0 100%
tests/unit/v1/test_collection.py 197 0 10 0 100%
tests/unit/v1/test_cross_language.py 207 0 82 0 100%
tests/unit/v1/test_document.py 307 0 26 0 100%
tests/unit/v1/test_field_path.py 355 0 8 0 100%
tests/unit/v1/test_order.py 138 0 8 0 100%
tests/unit/v1/test_query.py 318 0 0 0 100%
tests/unit/v1/test_transaction.py 560 0 0 0 100%
tests/unit/v1/test_transforms.py 78 0 8 0 100%
tests/unit/v1/test_watch.py 667 0 4 0 100%
---------------------------------------------------------------------------------------------------------------------------------
TOTAL 13967 0 1588 0 100%
nox > coverage erase
nox > Session cover was successful.
real 0m3.581s
user 0m3.419s
sys 0m0.163s
docs
nox > Running session docs
nox > Creating virtual environment (virtualenv) using python3.8 in .nox/docs
nox > python -m pip install -e .
nox > python -m pip install sphinx==4.0.1 alabaster recommonmark
nox > Skipping sphinx-build run, as --install-only is set.
nox > Session docs was successful.
nox > Running session docs
nox > Re-using existing virtual environment at .nox/docs.
nox > python -m pip install -e .
nox > python -m pip install sphinx==4.0.1 alabaster recommonmark
nox > sphinx-build -W -T -N -b html -d docs/_build/doctrees/ docs/ docs/_build/html/
Running Sphinx v4.0.1
making output directory... done
[autosummary] generating autosummary for: README.rst, UPGRADING.md, admin_client.rst, batch.rst, changelog.md, client.rst, collection.rst, document.rst, field_path.rst, index.rst, multiprocessing.rst, query.rst, transaction.rst, transforms.rst, types.rst
loading intersphinx inventory from https://python.readthedocs.org/en/latest/objects.inv...
loading intersphinx inventory from https://googleapis.dev/python/google-auth/latest/objects.inv...
loading intersphinx inventory from https://googleapis.dev/python/google-api-core/latest/objects.inv...
loading intersphinx inventory from https://grpc.github.io/grpc/python/objects.inv...
loading intersphinx inventory from https://proto-plus-python.readthedocs.io/en/latest/objects.inv...
loading intersphinx inventory from https://googleapis.dev/python/protobuf/latest/objects.inv...
intersphinx inventory has moved: https://python.readthedocs.org/en/latest/objects.inv -> https://python.readthedocs.io/en/latest/objects.inv
building [mo]: targets for 0 po files that are out of date
building [html]: targets for 15 source files that are out of date
updating environment: [new config] 15 added, 0 changed, 0 removed
reading sources... [ 6%] README
reading sources... [ 13%] UPGRADING
/home/tseaver/projects/agendaless/Google/src/python-firestore/.nox/docs/lib/python3.8/site-packages/recommonmark/parser.py:75: UserWarning: Container node skipped: type=document
warn("Container node skipped: type={0}".format(mdnode.t))
reading sources... [ 20%] admin_client
reading sources... [ 26%] batch
reading sources... [ 33%] changelog
/home/tseaver/projects/agendaless/Google/src/python-firestore/.nox/docs/lib/python3.8/site-packages/recommonmark/parser.py:75: UserWarning: Container node skipped: type=document
warn("Container node skipped: type={0}".format(mdnode.t))
reading sources... [ 40%] client
reading sources... [ 46%] collection
reading sources... [ 53%] document
reading sources... [ 60%] field_path
reading sources... [ 66%] index
reading sources... [ 73%] multiprocessing
reading sources... [ 80%] query
reading sources... [ 86%] transaction
reading sources... [ 93%] transforms
reading sources... [100%] types
looking for now-outdated files... none found
pickling environment... done
checking consistency... done
preparing documents... done
writing output... [ 6%] README
writing output... [ 13%] UPGRADING
writing output... [ 20%] admin_client
writing output... [ 26%] batch
writing output... [ 33%] changelog
writing output... [ 40%] client
writing output... [ 46%] collection
writing output... [ 53%] document
writing output... [ 60%] field_path
writing output... [ 66%] index
writing output... [ 73%] multiprocessing
writing output... [ 80%] query
writing output... [ 86%] transaction
writing output... [ 93%] transforms
writing output... [100%] types
generating indices... genindex py-modindex done
highlighting module code... [ 3%] google.cloud.firestore_admin_v1.services.firestore_admin.client
highlighting module code... [ 7%] google.cloud.firestore_v1.async_batch
highlighting module code... [ 11%] google.cloud.firestore_v1.async_client
highlighting module code... [ 15%] google.cloud.firestore_v1.async_collection
highlighting module code... [ 19%] google.cloud.firestore_v1.async_document
highlighting module code... [ 23%] google.cloud.firestore_v1.async_query
highlighting module code... [ 26%] google.cloud.firestore_v1.async_transaction
highlighting module code... [ 30%] google.cloud.firestore_v1.base_batch
highlighting module code... [ 34%] google.cloud.firestore_v1.base_client
highlighting module code... [ 38%] google.cloud.firestore_v1.base_collection
highlighting module code... [ 42%] google.cloud.firestore_v1.base_document
highlighting module code... [ 46%] google.cloud.firestore_v1.base_query
highlighting module code... [ 50%] google.cloud.firestore_v1.base_transaction
highlighting module code... [ 53%] google.cloud.firestore_v1.batch
highlighting module code... [ 57%] google.cloud.firestore_v1.client
highlighting module code... [ 61%] google.cloud.firestore_v1.collection
highlighting module code... [ 65%] google.cloud.firestore_v1.document
highlighting module code... [ 69%] google.cloud.firestore_v1.field_path
highlighting module code... [ 73%] google.cloud.firestore_v1.query
highlighting module code... [ 76%] google.cloud.firestore_v1.transaction
highlighting module code... [ 80%] google.cloud.firestore_v1.transforms
highlighting module code... [ 84%] google.cloud.firestore_v1.types.common
highlighting module code... [ 88%] google.cloud.firestore_v1.types.document
highlighting module code... [ 92%] google.cloud.firestore_v1.types.firestore
highlighting module code... [ 96%] google.cloud.firestore_v1.types.query
highlighting module code... [100%] google.cloud.firestore_v1.types.write
writing additional pages... search done
copying static files... done
copying extra files... done
dumping search index in English (code: en)... done
dumping object inventory... done
build succeeded.
The HTML pages are in docs/_build/html.
nox > Session docs was successful.
real 0m12.548s
user 0m12.024s
sys 0m0.354s
```
Given that the system tests take 3-4 minutes to run, ISTM it would be good to break them out into a separate Kokoro job, running in parallel with the other tests.
This change will require updates to the google3 internal configuration for Kokoro, similar to those @tswast made to enable them for googleapis/python-bigtable#390.
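For concreteness, here is a rough sketch of how the split could look in terms of the existing nox sessions. The session names are the ones in the log above; the job grouping and the direct `nox -s ...` invocations are purely illustrative, since the actual session selection would live in the internal Kokoro config.

```bash
# Sketch only: one possible split of the existing nox sessions across two
# parallel Kokoro jobs. Session names come from the timings above; the job
# boundaries and invocation style are illustrative, not the real config.

# Job 1: lint, unit tests, coverage report, and docs -- roughly 1.5 minutes
# total on the timings above.
python3 -m nox -s lint blacken lint_setup_py unit-3.6 unit-3.7 unit-3.8 unit-3.9 cover docs

# Job 2: the live-backend system tests, i.e. the ~3.5 minute part.
python3 -m nox -s system-3.7
```

With a split along those lines, the end-to-end presubmit time would be bounded by the system job (~3.5 minutes) rather than by the sum of everything (~5 minutes today).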
unit test base client py tests unit test base collection py tests unit test base document py tests unit test base query py tests unit test base transaction py tests unit test batch py tests unit test bundle py tests unit test client py tests unit test collection py tests unit test cross language py tests unit test document py tests unit test field path py tests unit test order py tests unit test query py tests unit test transaction py tests unit test transforms py tests unit test watch py total nox coverage erase nox session cover was successful real user sys docs nox running session docs nox creating virtual environment virtualenv using in nox docs nox python m pip install e nox python m pip install sphinx alabaster recommonmark nox skipping sphinx build run as install only is set nox session docs was successful nox running session docs nox re using existing virtual environment at nox docs nox python m pip install e nox python m pip install sphinx alabaster recommonmark nox sphinx build w t n b html d docs build doctrees docs docs build html running sphinx making output directory done generating autosummary for readme rst upgrading md admin client rst batch rst changelog md client rst collection rst document rst field path rst index rst multiprocessing rst query rst transaction rst transforms rst types rst loading intersphinx inventory from loading intersphinx inventory from loading intersphinx inventory from loading intersphinx inventory from loading intersphinx inventory from loading intersphinx inventory from intersphinx inventory has moved building targets for po files that are out of date building targets for source files that are out of date updating environment added changed removed reading sources readme reading sources upgrading home tseaver projects agendaless google src python firestore nox docs lib site packages recommonmark parser py userwarning container node skipped type document warn container node skipped type format mdnode t reading sources admin client reading sources batch reading sources changelog home tseaver projects agendaless google src python firestore nox docs lib site packages recommonmark parser py userwarning container node skipped type document warn container node skipped type format mdnode t reading sources client reading sources collection reading sources document reading sources field path reading sources index reading sources multiprocessing reading sources query reading sources transaction reading sources transforms reading sources types looking for now outdated files none found pickling environment done checking consistency done preparing documents done writing output readme writing output upgrading writing output admin client writing output batch writing output changelog writing output client writing output collection writing output document writing output field path writing output index writing output multiprocessing writing output query writing output transaction writing output transforms writing output types generating indices genindex py modindex done highlighting module code google cloud firestore admin services firestore admin client highlighting module code google cloud firestore async batch highlighting module code google cloud firestore async client highlighting module code google cloud firestore async collection highlighting module code google cloud firestore async document highlighting module code google cloud firestore async query highlighting module code google cloud firestore async transaction highlighting module code google cloud 
firestore base batch highlighting module code google cloud firestore base client highlighting module code google cloud firestore base collection highlighting module code google cloud firestore base document highlighting module code google cloud firestore base query highlighting module code google cloud firestore base transaction highlighting module code google cloud firestore batch highlighting module code google cloud firestore client highlighting module code google cloud firestore collection highlighting module code google cloud firestore document highlighting module code google cloud firestore field path highlighting module code google cloud firestore query highlighting module code google cloud firestore transaction highlighting module code google cloud firestore transforms highlighting module code google cloud firestore types common highlighting module code google cloud firestore types document highlighting module code google cloud firestore types firestore highlighting module code google cloud firestore types query highlighting module code google cloud firestore types write writing additional pages search done copying static files done copying extra files done dumping search index in english code en done dumping object inventory done build succeeded the html pages are in docs build html nox session docs was successful real user sys given that the system tests take minutes to run istm it would be good to break them out into a separate kokoro job running in parallel with the other test this change will require updates to the internal configuration for kokoro similar to those tswast made to enable them for googleapis python bigtable ,0
3836,14674163917.0,IssuesEvent,2020-12-30 14:44:58,z0ph/status,https://api.github.com/repos/z0ph/status,closed,🛑 Home Automation is down,home-automation status,"In [`6277ed7`](https://github.com/z0ph/status/commit/6277ed7dc02f689ab1eeb45b909d514187a126ef
), Home Automation ($HOME_AUTOMATION) was **down**:
- HTTP code: 0
- Response time: 0 ms
",1.0,"🛑 Home Automation is down - In [`6277ed7`](https://github.com/z0ph/status/commit/6277ed7dc02f689ab1eeb45b909d514187a126ef
), Home Automation ($HOME_AUTOMATION) was **down**:
- HTTP code: 0
- Response time: 0 ms
",1,🛑 home automation is down in home automation home automation was down http code response time ms ,1
63701,14656763821.0,IssuesEvent,2020-12-28 14:08:44,fu1771695yongxie/vue-router,https://api.github.com/repos/fu1771695yongxie/vue-router,opened,CVE-2019-14863 (Medium) detected in angular-1.4.2.min.js,security vulnerability,"## CVE-2019-14863 - Medium Severity Vulnerability
Vulnerable Library - angular-1.4.2.min.js
AngularJS is an MVC framework for building web applications. The core features include HTML enhanced with custom component and data-binding capabilities, dependency injection and strong focus on simplicity, testability, maintainability and boiler-plate reduction.
Path to dependency file: vue-router/node_modules/autocomplete.js/test/playground_angular.html
Path to vulnerable library: vue-router/node_modules/autocomplete.js/test/playground_angular.html,vue-router/node_modules/autocomplete.js/examples/basic_angular.html
There is a vulnerability in all angular versions before 1.5.0-beta.0, where after escaping the context of the web application, the web application delivers data to its users along with other trusted dynamic content, without validating it.
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2019-14863 (Medium) detected in angular-1.4.2.min.js - ## CVE-2019-14863 - Medium Severity Vulnerability
Vulnerable Library - angular-1.4.2.min.js
AngularJS is an MVC framework for building web applications. The core features include HTML enhanced with custom component and data-binding capabilities, dependency injection and strong focus on simplicity, testability, maintainability and boiler-plate reduction.
Path to dependency file: vue-router/node_modules/autocomplete.js/test/playground_angular.html
Path to vulnerable library: vue-router/node_modules/autocomplete.js/test/playground_angular.html,vue-router/node_modules/autocomplete.js/examples/basic_angular.html
There is a vulnerability in all angular versions before 1.5.0-beta.0, where after escaping the context of the web application, the web application delivers data to its users along with other trusted dynamic content, without validating it.
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve medium detected in angular min js cve medium severity vulnerability vulnerable library angular min js angularjs is an mvc framework for building web applications the core features include html enhanced with custom component and data binding capabilities dependency injection and strong focus on simplicity testability maintainability and boiler plate reduction library home page a href path to dependency file vue router node modules autocomplete js test playground angular html path to vulnerable library vue router node modules autocomplete js test playground angular html vue router node modules autocomplete js examples basic angular html dependency hierarchy x angular min js vulnerable library found in head commit a href found in base branch dev vulnerability details there is a vulnerability in all angular versions before beta where after escaping the context of the web application the web application delivers data to its users along with other trusted dynamic content without validating it publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution angular beta org webjars angularjs rc step up your open source security game with whitesource ,0
102015,4149722155.0,IssuesEvent,2016-06-15 15:14:40,ArdaCraft/IssueTracker,https://api.github.com/repos/ArdaCraft/IssueTracker,closed,No Physics,high priority plugin,A plugin to prevent vanilla physics/block updates so that blocks can be placed in special ways (such as floating torches),1.0,No Physics - A plugin to prevent vanilla physics/block updates so that blocks can be placed in special ways (such as floating torches),0,no physics a plugin to prevent vanilla physics block updates so that blocks can be placed in special ways such as floating torches ,0
182312,14114604065.0,IssuesEvent,2020-11-07 16:46:29,compare-ci/admin,https://api.github.com/repos/compare-ci/admin,closed,Automated test 1604767395.825574,Test,"This is a tracking issue for the automated tests being run. Test id: `automated-test-1604767395.825574`
|[python-sum](https://github.com/compare-ci/python-sum/pull/1188)|Pull Created|Check Start|Check End|Total|Check|
|-|-|-|-|-|-|
|CircleCI Checks|16:43:20|16:43:21|16:43:30|0:00:10|0:00:09|
|Travis CI|16:43:20|16:43:42|16:44:00|0:00:40|0:00:18|
|Azure Pipelines|16:43:20|16:45:54|16:46:03|0:02:43|0:00:09|
|[node-sum](https://github.com/compare-ci/node-sum/pull/1169)|Pull Created|Check Start|Check End|Total|Check|
|-|-|-|-|-|-|
|CircleCI Checks|16:43:25|16:43:26|16:44:26|0:01:01|0:01:00|
|Travis CI|16:43:25|16:43:45|16:44:27|0:01:02|0:00:42|
|GitHub Actions|16:43:25|16:43:38|16:43:58|0:00:33|0:00:20|
|Azure Pipelines|16:43:25|16:45:47|16:46:11|0:02:46|0:00:24|
",1.0,"Automated test 1604767395.825574 - This is a tracking issue for the automated tests being run. Test id: `automated-test-1604767395.825574`
|[python-sum](https://github.com/compare-ci/python-sum/pull/1188)|Pull Created|Check Start|Check End|Total|Check|
|-|-|-|-|-|-|
|CircleCI Checks|16:43:20|16:43:21|16:43:30|0:00:10|0:00:09|
|Travis CI|16:43:20|16:43:42|16:44:00|0:00:40|0:00:18|
|Azure Pipelines|16:43:20|16:45:54|16:46:03|0:02:43|0:00:09|
|[node-sum](https://github.com/compare-ci/node-sum/pull/1169)|Pull Created|Check Start|Check End|Total|Check|
|-|-|-|-|-|-|
|CircleCI Checks|16:43:25|16:43:26|16:44:26|0:01:01|0:01:00|
|Travis CI|16:43:25|16:43:45|16:44:27|0:01:02|0:00:42|
|GitHub Actions|16:43:25|16:43:38|16:43:58|0:00:33|0:00:20|
|Azure Pipelines|16:43:25|16:45:47|16:46:11|0:02:46|0:00:24|
",0,automated test this is a tracking issue for the automated tests being run test id automated test created check start check end total check circleci checks travis ci azure pipelines created check start check end total check circleci checks travis ci github actions azure pipelines ,0
3274,13309679472.0,IssuesEvent,2020-08-26 04:45:37,MicrosoftDocs/azure-docs,https://api.github.com/repos/MicrosoftDocs/azure-docs,closed,insufficient privileges issue from powershell workflow runbook.,Pri2 automation/svc cxp product-question shared-capabilities/subsvc triaged,"
The article does not describe how to use Azure AD registered applications from Azure PowerShell workflow runbooks when facing insufficient privileges.
I am able to connect to Azure Active Directory with the Connect-AzureAD cmdlet, but afterwards I face the insufficient privileges issue.
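For reference, a minimal sketch of the pattern the article could document, assuming the Automation account still has the default `AzureRunAsConnection` asset and the AzureAD module imported (the connection name and the final query are assumptions for illustration, not taken from this report):
```PowerShell
# Authenticate the Run As service principal against Azure AD using its certificate.
$runAsConnection = Get-AutomationConnection -Name 'AzureRunAsConnection'

Connect-AzureAD `
    -TenantId $runAsConnection.TenantId `
    -ApplicationId $runAsConnection.ApplicationId `
    -CertificateThumbprint $runAsConnection.CertificateThumbprint

# Directory queries will still fail with 'insufficient privileges' unless an
# administrator has granted the Run As application the required Azure AD
# directory role or Graph permissions; that step is what the article should cover.
Get-AzureADUser -Top 5
```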
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 56e2500f-e1f5-bc87-6e5c-f41b59265049
* Version Independent ID: d212be48-7d05-847d-3045-cea82e6ba603
* Content: [Manage an Azure Automation Run As account](https://docs.microsoft.com/en-us/azure/automation/manage-runas-account)
* Content Source: [articles/automation/manage-runas-account.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/automation/manage-runas-account.md)
* Service: **automation**
* Sub-service: **shared-capabilities**
* GitHub Login: @MGoedtel
* Microsoft Alias: **magoedte**",1.0,"insufficient privileges issue from powershell workflow runbook. -
The article does not describe how to use Azure AD registered applications from Azure PowerShell workflow runbooks when facing insufficient privileges.
I am able to connect to Azure Active Directory with the Connect-AzureAD cmdlet, but afterwards I face the insufficient privileges issue.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 56e2500f-e1f5-bc87-6e5c-f41b59265049
* Version Independent ID: d212be48-7d05-847d-3045-cea82e6ba603
* Content: [Manage an Azure Automation Run As account](https://docs.microsoft.com/en-us/azure/automation/manage-runas-account)
* Content Source: [articles/automation/manage-runas-account.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/automation/manage-runas-account.md)
* Service: **automation**
* Sub-service: **shared-capabilities**
* GitHub Login: @MGoedtel
* Microsoft Alias: **magoedte**",1,insufficient privileges issue from powershell workflow runbook article is not describing about how to use ad registered applications from azure powershell workflow runbooks while facing the insufficient privileges able to connect to the acitive directory with conenct azuread cmdlet but then after facing the insufficient privileges issue document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service automation sub service shared capabilities github login mgoedtel microsoft alias magoedte ,1
51190,21582629884.0,IssuesEvent,2022-05-02 20:30:06,Azure/azure-powershell,https://api.github.com/repos/Azure/azure-powershell,closed,"Add-AzVHD - Error with path with ampersand (&), even if properly quoted",Compute Service Attention bug question customer-reported,"### Description
`Add-AzVHD` will fail at the MD5 calculation step if the path to the VHD contains an ampersand, which is a legal path character on Windows.
The problem is here:
https://github.com/Azure/azure-powershell/blob/ebc4710853f09e2e16d33ff479fd2ee9b45a8156/src/Compute/Compute/Models/PSSyncOutputEvents.cs#L108-L116
I'm not sure why it's doing this instead of just calling [WriteProgress](https://docs.microsoft.com/en-us/dotnet/api/system.management.automation.cmdlet.writeprogress), `WriteInformation`, or similar.
There seems to be a lot of what is effectively ""eval""ing in this file and it should probably all be audited for injection vulnerabilities.
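As a user-side stopgap (not the underlying fix, which presumably belongs in `PSSyncOutputEvents.cs`, e.g. reporting progress via `WriteProgress` directly instead of composing a script string), one workaround sketch is to upload from a path without an ampersand; the copy destination below is an assumption, while the other parameters mirror the failing call from the error output:
```PowerShell
# Copy the VHD to a path that contains no ampersand, then upload from there.
$safePath = 'C:\Temp\disk-0.vhdx'
Copy-Item -LiteralPath 'E:\my vms & more\disk-0.vhdx' -Destination $safePath

Add-AzVhd -LocalFilePath $safePath `
    -ResourceGroupName RG-PrintScan `
    -Location australiasoutheast `
    -DiskName dsubmel753_os `
    -DiskSku StandardSSD_LRS
```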
### Environment data
```PowerShell
Name Value
---- -----
PSVersion 5.1.20348.320
PSEdition Desktop
PSCompatibleVersions {1.0, 2.0, 3.0, 4.0...}
BuildVersion 10.0.20348.320
CLRVersion 4.0.30319.42000
WSManStackVersion 3.0
PSRemotingProtocolVersion 2.3
SerializationVersion 1.1.0.1
```
### Module versions
```PowerShell
ModuleType Version Name
---------- ------- ----
Script 2.7.1 Az.Accounts
Script 4.23.0 Az.Compute
```
### Error output
```PowerShell
Message : At line:1 char:79
+ ... sh is being calculated for the file 'E:\my vms & more ...
+ ~
The ampersand (&) character is not allowed. The & operator is reserved for future use; wrap an ampersand in double quotation marks
(""&"") to pass it as part of a string.
StackTrace : at System.Management.Automation.Runspaces.PipelineBase.Invoke(IEnumerable input)
at System.Management.Automation.PowerShell.Worker.ConstructPipelineAndDoWork(Runspace rs, Boolean performSyncInvoke)
at System.Management.Automation.PowerShell.Worker.CreateRunspaceIfNeededAndDoWork(Runspace rsToUse, Boolean isSync)
at System.Management.Automation.PowerShell.CoreInvokeHelper[TInput,TOutput](PSDataCollection`1 input, PSDataCollection`1 output,
PSInvocationSettings settings)
at System.Management.Automation.PowerShell.CoreInvoke[TInput,TOutput](PSDataCollection`1 input, PSDataCollection`1 output,
PSInvocationSettings settings)
at System.Management.Automation.PowerShell.Invoke(IEnumerable input, PSInvocationSettings settings)
at Microsoft.Azure.Commands.Compute.Models.PSSyncOutputEvents.LogMessage(String format, Object[] parameters)
at Microsoft.WindowsAzure.Commands.Sync.Upload.FileMetaData.CalculateMd5Hash(Stream stream, String filePath)
at Microsoft.WindowsAzure.Commands.Sync.Upload.FileMetaData.Create(String filePath)
at Microsoft.Azure.Commands.Compute.Sync.Upload.DiskUploadCreator.get_OperationMetaData()
at Microsoft.Azure.Commands.Compute.Sync.Upload.DiskUploadCreator.get_MD5HashOfLocalVhd()
at Microsoft.Azure.Commands.Compute.Sync.Upload.DiskUploadCreator.Create(FileInfo localVhd, PSPageBlobClient pageblob, Boolean
overWrite)
at Microsoft.Azure.Commands.Compute.StorageServices.AddAzureVhdCommand.b__51_0()
at Microsoft.Azure.Commands.Compute.ComputeClientBaseCmdlet.ExecuteClientAction(Action action)
at Microsoft.WindowsAzure.Commands.Utilities.Common.AzurePSCmdlet.ProcessRecord()
Exception : System.Management.Automation.ParseException
InvocationInfo : {Add-AzVhd}
Position : At line:2 char:1
+ Add-AzVhd -LocalFilePath ""E:\my vms & more\disk ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
HistoryId : 20
Message : At line:1 char:79
+ ... sh is being calculated for the file 'E:\my vms & more...
+ ~
The ampersand (&) character is not allowed. The & operator is reserved for future use; wrap an ampersand in double quotation marks
(""&"") to pass it as part of a string.
StackTrace : at System.Management.Automation.Runspaces.PipelineBase.Invoke(IEnumerable input)
at System.Management.Automation.PowerShell.Worker.ConstructPipelineAndDoWork(Runspace rs, Boolean performSyncInvoke)
at System.Management.Automation.PowerShell.Worker.CreateRunspaceIfNeededAndDoWork(Runspace rsToUse, Boolean isSync)
at System.Management.Automation.PowerShell.CoreInvokeHelper[TInput,TOutput](PSDataCollection`1 input, PSDataCollection`1 output,
PSInvocationSettings settings)
at System.Management.Automation.PowerShell.CoreInvoke[TInput,TOutput](PSDataCollection`1 input, PSDataCollection`1 output,
PSInvocationSettings settings)
at System.Management.Automation.PowerShell.Invoke(IEnumerable input, PSInvocationSettings settings)
at Microsoft.Azure.Commands.Compute.Models.PSSyncOutputEvents.LogMessage(String format, Object[] parameters)
at Microsoft.WindowsAzure.Commands.Sync.Upload.FileMetaData.CalculateMd5Hash(Stream stream, String filePath)
at Microsoft.WindowsAzure.Commands.Sync.Upload.FileMetaData.Create(String filePath)
at Microsoft.Azure.Commands.Compute.Sync.Upload.DiskUploadCreator.get_OperationMetaData()
at Microsoft.Azure.Commands.Compute.Sync.Upload.DiskUploadCreator.get_MD5HashOfLocalVhd()
at Microsoft.Azure.Commands.Compute.Sync.Upload.DiskUploadCreator.Create(FileInfo localVhd, PSPageBlobClient pageblob, Boolean
overWrite)
at Microsoft.Azure.Commands.Compute.StorageServices.AddAzureVhdCommand.b__51_0()
at Microsoft.Azure.Commands.Compute.ComputeClientBaseCmdlet.ExecuteClientAction(Action action)
at Microsoft.WindowsAzure.Commands.Utilities.Common.AzurePSCmdlet.ProcessRecord()
Exception : System.Management.Automation.ParseException
InvocationInfo : {Add-AzVhd}
Line : Add-AzVhd -LocalFilePath ""E:\my vms & more\disk-0.vhdx"" -ResourceGroupName RG-PrintScan -Location
australiasoutheast -DiskName dsubmel753_os -DiskSku StandardSSD_LRS
```
",1.0,"Add-AzVHD - Error with path with ampersand (&), even if properly quoted - ### Description
`Add-AzVHD` will fail at the MD5 calculation step if the path to the VHD contains an ampersand, which is a legal path character on Windows.
The problem is here:
https://github.com/Azure/azure-powershell/blob/ebc4710853f09e2e16d33ff479fd2ee9b45a8156/src/Compute/Compute/Models/PSSyncOutputEvents.cs#L108-L116
I'm not sure why it's doing this instead of just calling [WriteProgress](https://docs.microsoft.com/en-us/dotnet/api/system.management.automation.cmdlet.writeprogress), `WriteInformation`, or similar.
There seems to be a lot of what is effectively ""eval""ing in this file and it should probably all be audited for injection vulnerabilities.
### Environment data
```PowerShell
Name Value
---- -----
PSVersion 5.1.20348.320
PSEdition Desktop
PSCompatibleVersions {1.0, 2.0, 3.0, 4.0...}
BuildVersion 10.0.20348.320
CLRVersion 4.0.30319.42000
WSManStackVersion 3.0
PSRemotingProtocolVersion 2.3
SerializationVersion 1.1.0.1
```
### Module versions
```PowerShell
ModuleType Version Name
---------- ------- ----
Script 2.7.1 Az.Accounts
Script 4.23.0 Az.Compute
```
### Error output
```PowerShell
Message : At line:1 char:79
+ ... sh is being calculated for the file 'E:\my vms & more ...
+ ~
The ampersand (&) character is not allowed. The & operator is reserved for future use; wrap an ampersand in double quotation marks
(""&"") to pass it as part of a string.
StackTrace : at System.Management.Automation.Runspaces.PipelineBase.Invoke(IEnumerable input)
at System.Management.Automation.PowerShell.Worker.ConstructPipelineAndDoWork(Runspace rs, Boolean performSyncInvoke)
at System.Management.Automation.PowerShell.Worker.CreateRunspaceIfNeededAndDoWork(Runspace rsToUse, Boolean isSync)
at System.Management.Automation.PowerShell.CoreInvokeHelper[TInput,TOutput](PSDataCollection`1 input, PSDataCollection`1 output,
PSInvocationSettings settings)
at System.Management.Automation.PowerShell.CoreInvoke[TInput,TOutput](PSDataCollection`1 input, PSDataCollection`1 output,
PSInvocationSettings settings)
at System.Management.Automation.PowerShell.Invoke(IEnumerable input, PSInvocationSettings settings)
at Microsoft.Azure.Commands.Compute.Models.PSSyncOutputEvents.LogMessage(String format, Object[] parameters)
at Microsoft.WindowsAzure.Commands.Sync.Upload.FileMetaData.CalculateMd5Hash(Stream stream, String filePath)
at Microsoft.WindowsAzure.Commands.Sync.Upload.FileMetaData.Create(String filePath)
at Microsoft.Azure.Commands.Compute.Sync.Upload.DiskUploadCreator.get_OperationMetaData()
at Microsoft.Azure.Commands.Compute.Sync.Upload.DiskUploadCreator.get_MD5HashOfLocalVhd()
at Microsoft.Azure.Commands.Compute.Sync.Upload.DiskUploadCreator.Create(FileInfo localVhd, PSPageBlobClient pageblob, Boolean
overWrite)
at Microsoft.Azure.Commands.Compute.StorageServices.AddAzureVhdCommand.b__51_0()
at Microsoft.Azure.Commands.Compute.ComputeClientBaseCmdlet.ExecuteClientAction(Action action)
at Microsoft.WindowsAzure.Commands.Utilities.Common.AzurePSCmdlet.ProcessRecord()
Exception : System.Management.Automation.ParseException
InvocationInfo : {Add-AzVhd}
Position : At line:2 char:1
+ Add-AzVhd -LocalFilePath ""E:\my vms & more\disk ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
HistoryId : 20
Message : At line:1 char:79
+ ... sh is being calculated for the file 'E:\my vms & more...
+ ~
The ampersand (&) character is not allowed. The & operator is reserved for future use; wrap an ampersand in double quotation marks
(""&"") to pass it as part of a string.
StackTrace : at System.Management.Automation.Runspaces.PipelineBase.Invoke(IEnumerable input)
at System.Management.Automation.PowerShell.Worker.ConstructPipelineAndDoWork(Runspace rs, Boolean performSyncInvoke)
at System.Management.Automation.PowerShell.Worker.CreateRunspaceIfNeededAndDoWork(Runspace rsToUse, Boolean isSync)
at System.Management.Automation.PowerShell.CoreInvokeHelper[TInput,TOutput](PSDataCollection`1 input, PSDataCollection`1 output,
PSInvocationSettings settings)
at System.Management.Automation.PowerShell.CoreInvoke[TInput,TOutput](PSDataCollection`1 input, PSDataCollection`1 output,
PSInvocationSettings settings)
at System.Management.Automation.PowerShell.Invoke(IEnumerable input, PSInvocationSettings settings)
at Microsoft.Azure.Commands.Compute.Models.PSSyncOutputEvents.LogMessage(String format, Object[] parameters)
at Microsoft.WindowsAzure.Commands.Sync.Upload.FileMetaData.CalculateMd5Hash(Stream stream, String filePath)
at Microsoft.WindowsAzure.Commands.Sync.Upload.FileMetaData.Create(String filePath)
at Microsoft.Azure.Commands.Compute.Sync.Upload.DiskUploadCreator.get_OperationMetaData()
at Microsoft.Azure.Commands.Compute.Sync.Upload.DiskUploadCreator.get_MD5HashOfLocalVhd()
at Microsoft.Azure.Commands.Compute.Sync.Upload.DiskUploadCreator.Create(FileInfo localVhd, PSPageBlobClient pageblob, Boolean
overWrite)
at Microsoft.Azure.Commands.Compute.StorageServices.AddAzureVhdCommand.b__51_0()
at Microsoft.Azure.Commands.Compute.ComputeClientBaseCmdlet.ExecuteClientAction(Action action)
at Microsoft.WindowsAzure.Commands.Utilities.Common.AzurePSCmdlet.ProcessRecord()
Exception : System.Management.Automation.ParseException
InvocationInfo : {Add-AzVhd}
Line : Add-AzVhd -LocalFilePath ""E:\my vms & more\disk-0.vhdx"" -ResourceGroupName RG-PrintScan -Location
australiasoutheast -DiskName dsubmel753_os -DiskSku StandardSSD_LRS
```
",0,add azvhd error with path with ampersand even if properly quoted description add azvhd will fail at the calculation step if the path to the vhd contains an ampersand which is a legal path character on windows the problem is here i m not sure why it s doing this instead of just calling writeinformation or similar there seems to be a lot of what is effectively eval ing in this file and it should probably all be audited for injection vulnerabilities environment data powershell name value psversion psedition desktop pscompatibleversions buildversion clrversion wsmanstackversion psremotingprotocolversion serializationversion module versions powershell moduletype version name script az accounts script az compute error output powershell message at line char sh is being calculated for the file e my vms more the ampersand character is not allowed the operator is reserved for future use wrap an ampersand in double quotation marks to pass it as part of a string stacktrace at system management automation runspaces pipelinebase invoke ienumerable input at system management automation powershell worker constructpipelineanddowork runspace rs boolean performsyncinvoke at system management automation powershell worker createrunspaceifneededanddowork runspace rstouse boolean issync at system management automation powershell coreinvokehelper psdatacollection input psdatacollection output psinvocationsettings settings at system management automation powershell coreinvoke psdatacollection input psdatacollection output psinvocationsettings settings at system management automation powershell invoke ienumerable input psinvocationsettings settings at microsoft azure commands compute models pssyncoutputevents logmessage string format object parameters at microsoft windowsazure commands sync upload filemetadata stream stream string filepath at microsoft windowsazure commands sync upload filemetadata create string filepath at microsoft azure commands compute sync upload diskuploadcreator get operationmetadata at microsoft azure commands compute sync upload diskuploadcreator get at microsoft azure commands compute sync upload diskuploadcreator create fileinfo localvhd pspageblobclient pageblob boolean overwrite at microsoft azure commands compute storageservices addazurevhdcommand b at microsoft azure commands compute computeclientbasecmdlet executeclientaction action action at microsoft windowsazure commands utilities common azurepscmdlet processrecord exception system management automation parseexception invocationinfo add azvhd position at line char add azvhd localfilepath e my vms more disk historyid message at line char sh is being calculated for the file e my vms more the ampersand character is not allowed the operator is reserved for future use wrap an ampersand in double quotation marks to pass it as part of a string stacktrace at system management automation runspaces pipelinebase invoke ienumerable input at system management automation powershell worker constructpipelineanddowork runspace rs boolean performsyncinvoke at system management automation powershell worker createrunspaceifneededanddowork runspace rstouse boolean issync at system management automation powershell coreinvokehelper psdatacollection input psdatacollection output psinvocationsettings settings at system management automation powershell coreinvoke psdatacollection input psdatacollection output psinvocationsettings settings at system management automation powershell invoke ienumerable input psinvocationsettings settings at microsoft 
azure commands compute models pssyncoutputevents logmessage string format object parameters at microsoft windowsazure commands sync upload filemetadata stream stream string filepath at microsoft windowsazure commands sync upload filemetadata create string filepath at microsoft azure commands compute sync upload diskuploadcreator get operationmetadata at microsoft azure commands compute sync upload diskuploadcreator get at microsoft azure commands compute sync upload diskuploadcreator create fileinfo localvhd pspageblobclient pageblob boolean overwrite at microsoft azure commands compute storageservices addazurevhdcommand b at microsoft azure commands compute computeclientbasecmdlet executeclientaction action action at microsoft windowsazure commands utilities common azurepscmdlet processrecord exception system management automation parseexception invocationinfo add azvhd line add azvhd localfilepath e my vms more disk vhdx resourcegroupname rg printscan location australiasoutheast diskname os disksku standardssd lrs ,0
2440,11962793398.0,IssuesEvent,2020-04-05 13:45:08,BuildingCityDashboards/bcd-dd-v2.1,https://api.github.com/repos/BuildingCityDashboards/bcd-dd-v2.1,opened,Add blob storage,automation enhancement,"Move static data documents to blob storage e.g. all csvs and json.
This should be linked to a DB that serves as a lookup. Refs can be changed to reflect the latest version of the document, e.g. with updated figures.
Need to define which data will be stored (S3 is cheap, so don't be conservative) and which will remain as a call to an external API.
Realtime data being archived should also be persisted to blob storage as a source of truth.
Related #126 #638 #209 #207 ",1.0,"Add blob storage - Move static data documents to blob storage e.g. all csvs and json.
This should be linked to a DB that serves as a lookup. Refs can be changed to reflect the latest version of the document, e.g. with updated figures.
Need to define which data will be stored (S3 is cheap, so don't be conservative) and which will remain as a call to an external API.
Realtime data being archived should also be persisted to blob storage as a source of truth.
Related #126 #638 #209 #207 ",1,add blob storage move static data documents to blob storage e g all csvs and json this should be linked to a db that serves as a lookup refs can be changed to reflect latest version of the document e g with update figures need to define which data will be stored is cheap so don t be conservative and which will remain as a call to an external api realtime data being archived should also be persisted to blob storage as a source of truth related ,1
106678,23265828431.0,IssuesEvent,2022-08-04 17:16:29,objectos/objectos,https://api.github.com/repos/objectos/objectos,reopened,AsciiDoc: support inline macros,t:feature c:code a:objectos-asciidoc,"## Test cases
- [x] tc01: well formed https
- [x] tc02: not an inline macro (rollback)",1.0,"AsciiDoc: support inline macros - ## Test cases
- [x] tc01: well formed https
- [x] tc02: not an inline macro (rollback)",0,asciidoc support inline macros test cases well formed https not an inline macro rollback ,0
1564,10343118877.0,IssuesEvent,2019-09-04 08:15:33,a-t-0/Taskwarrior-installation,https://api.github.com/repos/a-t-0/Taskwarrior-installation,opened,"Change the way the arguments are read, when running from a cronjob",Automation bug,"When running a cronjob, it can be difficult to pass all the arguments to the `javaServerSort.jar` file. As a solution, you can either:
1. Put all the arguments between `""""` quotation marks to put them in a single string element, as currently is the case in the way the args are read.
2. Put them all separately after the `java -jar JavaServerSort.jar -argName0 -argValue0 -argName1 -argValue1..` command.
3. After the arguments are passed to the `JavaServerSort.jar` file the first time, create a config file that contains the arguments which is checked for existence, before checking the incoming input arguments.",1.0,"Change the way the arguments are read, when running from a cronjob - When running a cronjob, it can be difficult to pass all the arguments to the `javaServerSort.jar` file. As a solution, you can either:
1. Put all the arguments between `""""` quotation marks to put them in a single string element, as currently is the case in the way the args are read.
2. Put them all separately after the `java -jar JavaServerSort.jar -argName0 -argValue0 -argName1 -argValue1..` command.
3. After the arguments are passed to the `JavaServerSort.jar` file the first time, create a config file that contains the arguments which is checked for existence, before checking the incoming input arguments.",1,change the way the arguments are read when running from a cronjob when running a cronjob it can be difficult to pass all the arguments to the javaserversort jar file as a solution you can either put all the arguments between quotation marks to put them in a single string element as currently is the case in the way the args are read put them all separately after the java jar javaserversort jar command after the arguments are passed to the javaserversort jar file the first time create a config file that contains the arguments which is checked for existence before checking the incoming input arguments ,1
62762,8639383533.0,IssuesEvent,2018-11-23 18:41:26,erlang/rebar3,https://api.github.com/repos/erlang/rebar3,closed,Document rebar_compiler behaviour,documentation,"The new rebar_compiler behaviour is the recommended way to teach rebar to compile new file types, but use of the behaviour is non-obvious.
It would be great to get some documentation (such as comments on the behaviour callbacks https://github.com/erlang/rebar3/blob/311ee6b1371c3eea3611dc5d7945b1b5667c75bd/src/rebar_compiler.erl#L17-L24) so that it's clearer how to make use of it :)",1.0,"Document rebar_compiler behaviour - The new rebar_compiler behaviour is the recommended way to teach rebar to compile new file types, but use of the behaviour is non-obvious.
It would be great to get some documentation (such as comments on the behaviour callbacks https://github.com/erlang/rebar3/blob/311ee6b1371c3eea3611dc5d7945b1b5667c75bd/src/rebar_compiler.erl#L17-L24) so that it's clearer how to make use of it :)",0,document rebar compiler behaviour the new rebar compiler behaviour is the recommended way to teach rebar to compile new file types but use of the behaviour is non obvious it would be great to get some documentation such as comments on the behaviour callbacks so that it s clearer how to make use of it ,0
18661,5683750037.0,IssuesEvent,2017-04-13 13:31:10,fabric8io/fabric8-ux,https://api.github.com/repos/fabric8io/fabric8-ux,opened,Hover state for iteration side panel,code,Add a hover state on the iteration side panel and submit PR,1.0,Hover state for iteration side panel - Add a hover state on the iteration side panel and submit PR,0,hover state for iteration side panel add a hover state on the iteration side panel and submit pr,0
3712,14403345024.0,IssuesEvent,2020-12-03 15:56:54,keptn/keptn,https://api.github.com/repos/keptn/keptn,closed,Only run integration tests on travis for nightlies to save some build credits,automation type:chore,"Right now integration tests will run every time we push something to master (e.g., when merging PRs).
This consumes an abnormal amount of credits on travis-ci, and we need to save them.
As a quick fix, we can disable all integration tests to only run once per day (e.g., nightlies).",1.0,"Only run integration tests on travis for nightlies to save some build credits - Right now integration tests will run every time we push something to master (e.g., when merging PRs).
This consumes an abnormal amount of credits on travis-ci, and we need to save them.
As a quick fix, we can disable all integration tests to only run once per day (e.g., nightlies).",1,only run integration tests on travis for nightlies to save some build credits right now integration tests will run every time we push something to master e g when merging prs this consumes an abnormal amount of credits on travis ci and we need to save them as a quick fix we can disable all integration tests to only run once per day e g nightlies ,1
430866,30204303712.0,IssuesEvent,2023-07-05 08:23:42,PaloAltoNetworks/pan.dev,https://api.github.com/repos/PaloAltoNetworks/pan.dev,opened,"Issue/Help with ""List Host Findings""",documentation,"## Documentation link
https://pan.dev/prisma-cloud/api/cspm/get-host-findings/#list-host-findings
## Describe the problem
There is an issue with this page :
- Typo : The response is included within the block next to """"An example request body for all finding types is:""
- Behaviour : I cannot use this endpoint, I get a 400 Bad Request HTTP Error with no payload. We need clarification on how to use this API.
## Suggested fix
- Correct the documentation : give the expected answer and clarify responses of this API endpoint
",1.0,"Issue/Help with ""List Host Findings"" - ## Documentation link
https://pan.dev/prisma-cloud/api/cspm/get-host-findings/#list-host-findings
## Describe the problem
There is an issue with this page :
- Typo : The response is included within the block next to """"An example request body for all finding types is:""
- Behaviour : I cannot use this endpoint, I get a 400 Bad Request HTTP Error with no payload. We need clarification on how to use this API.
## Suggested fix
- Correct the documentation : give the expected answer and clarify responses of this API endpoint
",0,issue help with list host findings documentation link describe the problem there is an issue with this page typo the response is included within the block next to an example request body for all finding types is behaviour i cannot use this endpoint i get a bad request http error with no payload we need a clarification how to use this api suggested fix correct the documentation give the expected answer and clarify responses of this api endpoint ,0
2361,11825240745.0,IssuesEvent,2020-03-21 11:42:55,tajmone/hugo-book,https://api.github.com/repos/tajmone/hugo-book,closed,Enable Images Previews in Coalesced AsciiDoc Book,:bulb: enhancement :hammer: Travis CI :star: assets :star: automation :star: images,"- [x] Edit [`docs_src/hugo-book.asciidoc`][hugo-book.asciidoc]:
+ [x] Define `imagesdir` attr. via conditional preprocessor directives so that images are viewable in GitHub's WebUI, in both the sources inside `docs_src/` folder as well as in the [standalone AsciiDoc version].
- [x] Edit [`docs_src/build.sh`][build.sh]:
+ [x] After creating the [standalone AsciiDoc version], test that it would convert to HTML without errors — i.e. run it through Asciidoctor redirecting output to `>/dev/null`, using `--failure-level WARN`, so that failure to find the required images would fail the build on Travis CI.
- [x] Manually verify that all conversion scripts are producing correct output, and that images are displayed as expected:
+ [x] [`docs_src/build.sh`][build.sh]:
* [x] `docs/index.html`
* [x] `hugo-book.html`
* [x] `hugo-book.asciidoc `
+ [x] [`docs_src/preview.sh`][preview.sh]:
* [x] `docs_src/preview.html`
- [x] Document above changes:
+ [x] Main `README.md` Changelog.
-------------------------------------------------------------------------------
Currently, previewing the [standalone AsciiDoc version] on GitHub doesn't show the diagram images. This should be easily fixable by adding the right attribute in the header, providing the relative path to find the images.
When I initially worked on the AsciiDoc sources, I ensured that previewing each chapter source on GitHub's WebUI would correctly show the diagrams. At the time I didn't consider that I would also be adding a standalone coalesced version of the document.
If possible, it would be great if the diagrams could be shown correctly both in the standalone version and in the individual chapter sources. But this might either not be achievable (due to limitations in the GitHub previewer or the AsciiDoc Coalescer), or require overly complex hacks; in that case I should give precedence to the standalone AsciiDoc version over the single sources.
### References
- [Asciidoctor Manual]:
+ [§29.1. Setting the Location of Images]
+ [§48. Conditional Preprocessor Directives]
[standalone AsciiDoc version]: ../blob/master/hugo-book.asciidoc ""hugo-book.asciidoc""
[hugo-book.asciidoc]: ../blob/master/docs_src/hugo-book.asciidoc ""View source file""
[build.sh]: ../blob/master/docs_src/build.sh ""View source file""
[preview.sh]: ../blob/master/docs_src/preview.sh ""View source file""
[Asciidoctor Manual]: https://asciidoctor.org/docs/user-manual/#setting-the-location-of-images
[§29.1. Setting the Location of Images]: https://asciidoctor.org/docs/user-manual/#setting-the-location-of-images
[§48. Conditional Preprocessor Directives]: https://asciidoctor.org/docs/user-manual/#conditional-preprocessor-directives
",1.0,"Enable Images Previews in Coalesced AsciiDoc Book - - [x] Edit [`docs_src/hugo-book.asciidoc`][hugo-book.asciidoc]:
+ [x] Define `imagesdir` attr. via conditional preprocessor directives so that images are viewable in GitHub's WebUI, in both the sources inside `docs_src/` folder as well as in the [standalone AsciiDoc version].
- [x] Edit [`docs_src/build.sh`][build.sh]:
+ [x] After creating the [standalone AsciiDoc version], test that it would convert to HTML without errors — i.e. run it through Asciidoctor redirecting output to `>/dev/null`, using `--failure-level WARN`, so that failure to find the required images would fail the build on Travis CI.
- [x] Manually verify that all conversion scripts are producing correct output, and that images are displayed as expected:
+ [x] [`docs_src/build.sh`][build.sh]:
* [x] `docs/index.html`
* [x] `hugo-book.html`
* [x] `hugo-book.asciidoc `
+ [x] [`docs_src/preview.sh`][preview.sh]:
* [x] `docs_src/preview.html`
- [x] Document above changes:
+ [x] Main `README.md` Changelog.
-------------------------------------------------------------------------------
Currently, previewing the [standalone AsciiDoc version] on GitHub doesn't show the diagram images. This should be easily fixable by adding the right attribute in the header, providing the relative path to find the images.
When I initially worked on the AsciiDoc sources, I ensured that previewing each chapter source on GitHub's WebUI would correctly show the diagrams. At the time I didn't consider that I would also be adding a standalone coalesced version of the document.
If possible, it would be great if the diagrams could be shown correctly both in the standalone version and in the individual chapter sources. But this might either not be achievable (due to limitations in the GitHub previewer or the AsciiDoc Coalescer), or require overly complex hacks; in that case I should give precedence to the standalone AsciiDoc version over the single sources.
### References
- [Asciidoctor Manual]:
+ [§29.1. Setting the Location of Images]
+ [§48. Conditional Preprocessor Directives]
[standalone AsciiDoc version]: ../blob/master/hugo-book.asciidoc ""hugo-book.asciidoc""
[hugo-book.asciidoc]: ../blob/master/docs_src/hugo-book.asciidoc ""View source file""
[build.sh]: ../blob/master/docs_src/build.sh ""View source file""
[preview.sh]: ../blob/master/docs_src/preview.sh ""View source file""
[Asciidoctor Manual]: https://asciidoctor.org/docs/user-manual/#setting-the-location-of-images
[§29.1. Setting the Location of Images]: https://asciidoctor.org/docs/user-manual/#setting-the-location-of-images
[§48. Conditional Preprocessor Directives]: https://asciidoctor.org/docs/user-manual/#conditional-preprocessor-directives
",1,enable images previews in coalesced asciidoc book edit define imagesdir attr via conditional preprocessor directives so that images are viewable in github s webui in both the sources inside docs src folder as well as in the edit after creating the test that it would convert to html without errors — i e run it through asciidoctor redirecting output to dev null using failure level warn so that failure to find the required images would fail the build on travis ci manually verify that all conversion scripts are producing correct output and that images are displayed as expected docs index html hugo book html hugo book asciidoc docs src preview html document above changes main readme md changelog currently previewing the on github doesn t show the images diagrams this should be easily fixable by adding the right attribute in the header providing the relative path to find the images when i initially worked on the asciidoc sources i ensured that previewing each chapter source on github s webui would correctly show the diagrams at the time i didn t consider that i would be adding also a standalone coalesced version of the document if possible it would be great if the diagrams could be shown correctly both in the standalone version as well as in the single chapters sources but this might either be not achievable due to limitations in the github previewer or the asciidoc coalescer or require too complex hacks in this case i should give precedence to the standalone asciidoc version over the single sources references reference links blob master hugo book asciidoc hugo book asciidoc blob master docs src hugo book asciidoc view source file blob master docs src build sh view source file blob master docs src preview sh view source file ,1
174380,27630826059.0,IssuesEvent,2023-03-10 10:41:58,Regalis11/Barotrauma,https://api.github.com/repos/Regalis11/Barotrauma,closed,Forbidden word list not checking descriptions,Design,"### Disclaimers
- [X] I have searched the issue tracker to check if the issue has already been reported.
- [ ] My issue happened while using mods.
### What happened?
I've searched for others with this issue; no one seems to have commented on this.
As the title says, if you try blocking words with the forbiddenwordlist.txt it won't hide servers like it should, though if the server's name has a forbidden word in it, then it will block it.
(Small issue)
### Reproduction steps
1. Enable hide forbidden words
2. See servers with horrible words
### Bug prevalence
Happens every time I play
### Version
0.20.16.1
### -
_No response_
### Which operating system did you encounter this bug on?
Windows
### Relevant error messages and crash reports
_No response_",1.0,"Forbidden word list not checking descriptions - ### Disclaimers
- [X] I have searched the issue tracker to check if the issue has already been reported.
- [ ] My issue happened while using mods.
### What happened?
I've searched for others with this issue; no one seems to have commented on this.
As the title says, if you try blocking words with the forbiddenwordlist.txt it won't hide servers like it should, though if the server's name has a forbidden word in it, then it will block it.
(Small issue)
### Reproduction steps
1. Enable hide forbidden words
2. See servers with horrible words
### Bug prevalence
Happens every time I play
### Version
0.20.16.1
### -
_No response_
### Which operating system did you encounter this bug on?
Windows
### Relevant error messages and crash reports
_No response_",0,forbidden word list not checking descriptions disclaimers i have searched the issue tracker to check if the issue has already been reported my issue happened while using mods what happened i ve searched for others with this issue no one seems to have commented on this as the title says if you try blocking words with the forbiddenwordlist txt it wont hide servers like it should though if the servers name has a forbidden word in it then it will block it small issue reproduction steps enable hide forbidden words see servers with horrible words bug prevalence happens every time i play version no response which operating system did you encounter this bug on windows relevant error messages and crash reports no response ,0
691392,23695616386.0,IssuesEvent,2022-08-29 14:25:36,Adyen/adyen-magento2,https://api.github.com/repos/Adyen/adyen-magento2,closed,[PW-5393] sales_order_payment table size keeps growing,Enhancement Priority: medium Confirmed,"**Is your feature request related to a problem? Please describe.**
As our e-commerce platform grows, we have noticed that the `sales_order_payment` table is the largest one on our e-commerce platform.
After taking a deeper look at which fields contain a large amount of data, we have noticed that the Adyen module is storing a lot of data in the `additional_information` column.
**Describe the solution you'd like**
After placing the payment, we may no longer need all of the data stored in the `additional_information` column. The most important data is the PSP Reference, and it is stored in a separate column.",1.0,"[PW-5393] sales_order_payment table size keeps growing - **Is your feature request related to a problem? Please describe.**
As our e-commerce platform grows, we have noticed that the `sales_order_payment` table is the largest one on our e-commerce platform.
After taking a deeper look at which fields contain a large amount of data, we have noticed that the Adyen module is storing a lot of data in the `additional_information` column.
**Describe the solution you'd like**
After placing the payment, we may no longer need all of the data stored in the `additional_information` column. The most important data is the PSP Reference, and it is stored in a separate column.",0, sales order payment table size keeps growing is your feature request related to a problem please describe as far as our e commerce grows we have noticed that sales order payment table is the largest one on our e commerce platform after taking a look deeper in which fields contains a large amount of data we have noticed that adyen module is storing a lot of data on additional information column describe the solution you d like after placing payment we may don t need anymore whole data which is stored on additional information column most important data is psp reference and it is stored in a separate column ,0
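As a rough illustration of the requested behaviour, the sketch below prunes a JSON-encoded `additional_information` payload down to a whitelist of keys once the payment has been placed. It is not Adyen or Magento code; the key names and the assumption that the payload is plain JSON are mine.

```python
import json

# Assumption: only the PSP reference still matters after the order is placed.
KEEP_KEYS = {"pspReference"}

def prune_additional_information(raw: str) -> str:
    data = json.loads(raw)
    return json.dumps({k: v for k, v in data.items() if k in KEEP_KEYS})

raw = json.dumps({
    "pspReference": "8815479559925128",
    "paymentMethod": "scheme",
    "rawResponse": "x" * 10_000,  # the kind of bulky field that inflates the table
})
print(len(raw), "->", len(prune_additional_information(raw)))
```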
22923,4858108475.0,IssuesEvent,2016-11-12 23:35:12,n8rzz/atc,https://api.github.com/repos/n8rzz/atc,closed,Document git flow process,documentation,"- [ ] gh-pages, how/when code gets there
- [ ] release/x.x.x
- [ ] develop
- [ ] feature/ATC-xxx where to branch from and where to target
- [ ] bugfix/ATC-xxx where to branch from and where to target",1.0,"Document git flow process - - [ ] gh-pages, how/when code gets there
- [ ] release/x.x.x
- [ ] develop
- [ ] feature/ATC-xxx where to branch from and where to target
- [ ] bugfix/ATC-xxx where to branch from and where to target",0,document git flow process gh pages how when code gets there release x x x develop feature atc xxx where to branch from and where to target bugfix atc xxx where to branch from and where to target,0
2520,12177870524.0,IssuesEvent,2020-04-28 08:03:34,elastic/apm-integration-testing,https://api.github.com/repos/elastic/apm-integration-testing,closed,Build APM Server OSS for the OSS test,[zube]: In Progress automation bug,"After this change https://github.com/elastic/apm-server/commit/7520b723ceb424c9338b231e5ecf83779877398e#diff-b67911656ef5d18c4ae36cb6741b7965R35-R37 the default version we build is the x-pack version, which makes the ITs fail in the OSS test.",1.0,"Build APM Server OSS for the OSS test - After this change https://github.com/elastic/apm-server/commit/7520b723ceb424c9338b231e5ecf83779877398e#diff-b67911656ef5d18c4ae36cb6741b7965R35-R37 the default version we build is the x-pack version, which makes the ITs fail in the OSS test.",1,build apm server oss for the oss test after this change the default version we build is the x pack version this makes that the its fail in the oss test ,1
777186,27270904679.0,IssuesEvent,2023-02-22 22:14:21,webcompat/web-bugs,https://api.github.com/repos/webcompat/web-bugs,closed,beta.character.ai - Account login is not performed,browser-firefox-mobile priority-normal severity-critical browser-fenix engine-gecko diagnosis-priority-p1 trend-login,"
**URL**: https://beta.character.ai
**Browser / Version**: Firefox Mobile 108.0
**Operating System**: Android 10
**Tested Another Browser**: Yes Other
**Problem type**: Site is not usable
**Description**: Unable to login
**Steps to Reproduce**:
First, I entered the website and clicked the ""log in"" button on the top right corner of the page.
Then, I entered my email and password. Seemingly, the site accepted them as valid.
However, after the site's main page reloaded as a result of logging in, the site acted as if I still weren't logged in at all.
In fact, every time I try to pick an AI, the site simply asks me to log in again.
This process loops continuously and thus, I can't access my account or any of the site's content.
I can confirm that the website works correctly on Firefox for PC.
Browser Configuration
gfx.webrender.all: false
gfx.webrender.blob-images: true
gfx.webrender.enabled: false
image.mem.shared: true
buildID: 20221020093353
channel: nightly
hasTouchScreen: true
mixed active content blocked: false
mixed passive content blocked: false
tracking content blocked: false
[View console log messages](https://webcompat.com/console_logs/2022/10/224dcaf3-1e82-4697-a328-59d035de0aa2)
_From [webcompat.com](https://webcompat.com/) with ❤️_",2.0,"beta.character.ai - Account login is not performed -
**URL**: https://beta.character.ai
**Browser / Version**: Firefox Mobile 108.0
**Operating System**: Android 10
**Tested Another Browser**: Yes Other
**Problem type**: Site is not usable
**Description**: Unable to login
**Steps to Reproduce**:
First, I entered the website and clicked the ""log in"" button on the top right corner of the page.
Then, I entered my email and password. Seemingly, the site accepted them as valid.
However, after the site's main page reloaded as a result of logging in, the site acted as if I still weren't logged in at all.
In fact, every time I try to pick an AI, the site simply asks me to log in again.
This process loops continuously and thus, I can't access my account or any of the site's content.
I can confirm that the website works correctly on Firefox for PC.
Browser Configuration
gfx.webrender.all: false
gfx.webrender.blob-images: true
gfx.webrender.enabled: false
image.mem.shared: true
buildID: 20221020093353
channel: nightly
hasTouchScreen: true
mixed active content blocked: false
mixed passive content blocked: false
tracking content blocked: false
[View console log messages](https://webcompat.com/console_logs/2022/10/224dcaf3-1e82-4697-a328-59d035de0aa2)
_From [webcompat.com](https://webcompat.com/) with ❤️_",0,beta character ai account login is not performed url browser version firefox mobile operating system android tested another browser yes other problem type site is not usable description unable to login steps to reproduce first i entered the website and clicked the log in button on the top right corner of the page then i entered my email and password seemingly the site accepted them as valid however after the site s main page reloaded as a result of logging in the site acted as if i still weren t logged in at all in fact every time i ty to pick an ai the site simply asks me to log in again this process loops continuously and thus i can t access my account or any of the site s content i can confirm that the website works correctly on firefox for pc browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel nightly hastouchscreen true mixed active content blocked false mixed passive content blocked false tracking content blocked false from with ❤️ ,0
818,8211503200.0,IssuesEvent,2018-09-04 13:59:26,DevExpress/testcafe,https://api.github.com/repos/DevExpress/testcafe,closed," ""The element that matches the specified selector is not visible"" error on attempt to drag visible element",AREA: client SYSTEM: automations TYPE: bug,"### Are you requesting a feature or reporting a bug?
bug
### What is the current behavior?
Trying to drag the ""seekbar knob"" selector raises the ""The element that matches the specified selector is not visible"" error.
### What is the expected behavior?
Should be able to perform drag operation
### How would you reproduce the current behavior (if this is a bug)?
#### Provide the test code and the tested page URL (if applicable)
Tested page URL: https://kabbalahmedia.info/en/lessons/cu/iIqZE7y7?language=en
Test code
```js
test('timeCodeUpdateByDrag', async t => {
const getSeekbarRect = ClientFunction((selector) => {
const {top, left, bottom, right} = document.querySelector(selector).getBoundingClientRect();
return {top, left, bottom, right};
});
await player_utils.waitForPlayerToLoad();
let rect = await getSeekbarRect('.seekbar__knob');
console.debug(""Rect >> top: "" + rect.top + "" left: "" + rect.left + "" bottom: "" + rect.bottom +
"" right: "" + rect.right);
let current_mouse_x = rect.left + ((rect.right - rect.left) / 2);
let current_mouse_y = rect.top + ((rect.top - rect.bottom) / 2);
const seekbarSelector = await Selector('.seekbar__knob');
await t.drag(seekbarSelector, current_mouse_x + 100, parseInt(current_mouse_y));
});
```
### Specify your
* operating system: MacOS HighSierra
* testcafe version:0.21.1
* node.js version:9.8",1.0," ""The element that matches the specified selector is not visible"" error on attempt to drag visible element - ### Are you requesting a feature or reporting a bug?
bug
### What is the current behavior?
Trying to drag the ""seekbar knob"" selector raises the ""The element that matches the specified selector is not visible"" error.
### What is the expected behavior?
Should be able to perform drag operation
### How would you reproduce the current behavior (if this is a bug)?
#### Provide the test code and the tested page URL (if applicable)
Tested page URL: https://kabbalahmedia.info/en/lessons/cu/iIqZE7y7?language=en
Test code
```js
test('timeCodeUpdateByDrag', async t => {
const getSeekbarRect = ClientFunction((selector) => {
const {top, left, bottom, right} = document.querySelector(selector).getBoundingClientRect();
return {top, left, bottom, right};
});
await player_utils.waitForPlayerToLoad();
let rect = await getSeekbarRect('.seekbar__knob');
console.debug(""Rect >> top: "" + rect.top + "" left: "" + rect.left + "" bottom: "" + rect.bottom +
"" right: "" + rect.right);
let current_mouse_x = rect.left + ((rect.right - rect.left) / 2);
let current_mouse_y = rect.top + ((rect.top - rect.bottom) / 2);
const seekbarSelector = await Selector('.seekbar__knob');
await t.drag(seekbarSelector, current_mouse_x + 100, parseInt(current_mouse_y));
});
```
### Specify your
* operating system: MacOS HighSierra
* testcafe version:0.21.1
* node.js version:9.8",1, the element that matches the specified selector is not visible error on attempt to drag visible element are you requesting a feature or reporting a bug bug what is the current behavior trying to drag seekbar knob selector raising the element that matches the specified selector is not visible error what is the expected behavior should be able to perform drag operation how would you reproduce the current behavior if this is a bug provide the test code and the tested page url if applicable tested page url test code js test timecodeupdatebydrag async t const getseekbarrect clientfunction selector const top left bottom right document queryselector selector getboundingclientrect return top left bottom right await player utils waitforplayertoload let rect await getseekbarrect seekbar knob console debug rect top rect top left rect left bottom rect bottom right rect right let current mouse x rect left rect right rect left let current mouse y rect top rect top rect bottom const seekbarselector await selector seekbar knob await t drag seekbarselector current mouse x parseint current mouse y specify your operating system macos highsierra testcafe version node js version ,1
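One thing worth noting in the reproduction code above: `current_mouse_y` is computed as `rect.top + ((rect.top - rect.bottom) / 2)`, which puts the point above the knob rather than at its vertical centre, and that alone can make the target coordinates miss the visible element. A framework-agnostic Python sketch of the centre arithmetic, with made-up numbers, shows the difference; it may also be worth double-checking whether `t.drag` expects offsets relative to the drag start point rather than absolute page coordinates.

```python
# Pure geometry, no TestCafe involved; the rectangle values are made up.
rect = {"top": 200.0, "bottom": 220.0, "left": 100.0, "right": 400.0}

center_x = rect["left"] + (rect["right"] - rect["left"]) / 2   # 250.0
wrong_y  = rect["top"]  + (rect["top"] - rect["bottom"]) / 2   # 190.0, above the element
center_y = rect["top"]  + (rect["bottom"] - rect["top"]) / 2   # 210.0, the actual centre

print(center_x, wrong_y, center_y)
```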
5132,18717583808.0,IssuesEvent,2021-11-03 07:55:45,extratone/extratone,https://api.github.com/repos/extratone/extratone,opened,Join us Day 2 Into Focus at Microsoft Ignite! [(Open email in Spark)](readdlespark://bl=QTphc3BoYWx0YXBvc3RsZUBpY2xvdWQuY29tO0lEOnQycXNodWhGUW9pWjBpRThw%0D%0AWW04VmdAZ2VvcG9kLWlzbXRwZC02LTI7MzkzMjc4ODk3OQ%3D%3D),automation,03-Nov-2021 07:51:16 - 2274454577 -,1.0,Join us Day 2 Into Focus at Microsoft Ignite! [(Open email in Spark)](readdlespark://bl=QTphc3BoYWx0YXBvc3RsZUBpY2xvdWQuY29tO0lEOnQycXNodWhGUW9pWjBpRThw%0D%0AWW04VmdAZ2VvcG9kLWlzbXRwZC02LTI7MzkzMjc4ODk3OQ%3D%3D) - 03-Nov-2021 07:51:16 - 2274454577 -,1,join us day into focus at microsoft ignite readdlespark bl nov ,1
5686,20750088142.0,IssuesEvent,2022-03-15 06:15:26,EthanThatOneKid/acmcsuf.com,https://api.github.com/repos/EthanThatOneKid/acmcsuf.com,closed,[OFFICER_AUTOMATION],automation:officer,"### >>Officer Name<<
Angel Armendariz
### >>Term to Overwrite<<
Spring 2022
### >>Overwrite Officer Position Title<<
Dev Project Manager
### >>Overwrite Officer Position Tier<<
Dev Project Manager
### >>Overwrite Officer Picture<<

### >>Overwrite Officer GitHub Username<<
Angel-Armendariz",1.0,"[OFFICER_AUTOMATION] - ### >>Officer Name<<
Angel Armendariz
### >>Term to Overwrite<<
Spring 2022
### >>Overwrite Officer Position Title<<
Dev Project Manager
### >>Overwrite Officer Position Tier<<
Dev Project Manager
### >>Overwrite Officer Picture<<

### >>Overwrite Officer GitHub Username<<
Angel-Armendariz",1, officer name angel armendariz term to overwrite spring overwrite officer position title dev project manager overwrite officer position tier dev project manager overwrite officer picture overwrite officer github username angel armendariz,1
5652,20608213674.0,IssuesEvent,2022-03-07 04:37:11,Studio-Ops-Org/Studio-2022-S1-Repo,https://api.github.com/repos/Studio-Ops-Org/Studio-2022-S1-Repo,opened,Develop ARM templates for OE1,Automation,"Create an editable ARM template that can be used for OE1. This should do the following:
- [ ] Deploy x amount of Linux VMs using B1ls
- [ ] All VMs should be in the same VNET, subnet and resource group
- [ ] Unique public IP addresses for each machine
- [ ] NSG rules that allow access only from polytech
- [ ] Management username and password (sudo access)
- [ ] Student account (no authority) -> unsure if this can be done via ARM template
What are ARM templates? [This will help](https://www.varonis.com/blog/arm-template#:~:text=deleting%20Azure%20resources.-,What%20are%20ARM%20templates%3F,how%20the%20resources%20are%20created.)
Need some Azure documentation? [Here you go!](https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/overview)",1.0,"Develop ARM templates for OE1 - Create an editable ARM template that can be used for OE1. This should do the following:
- [ ] Deploy x amount of Linux VMs using B1ls
- [ ] All VMs should be in the same VNET, subnet and resource group
- [ ] Unique public IP addresses for each machine
- [ ] NSG rules that allow access only from polytech
- [ ] Management username and password (sudo access)
- [ ] Student account (no authority) -> unsure if this can be done via ARM template
What are ARM templates? [This will help](https://www.varonis.com/blog/arm-template#:~:text=deleting%20Azure%20resources.-,What%20are%20ARM%20templates%3F,how%20the%20resources%20are%20created.)
Need some Azure documentation? [Here you go!](https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/overview)",1,develop arm templates for create an editable arm template that can be used for this should do the following deploy x amount of linux vms using all vms be in the same vnet subnet and resource group unique public ip addresses for each machine nsg rules that allow access only from polytech management username as password sudo access student account no authority unsure if this can be done via arm template what are arm templates need some azure documentation ,1
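ARM templates are JSON documents, so one way to keep the OE1 template editable is to generate the skeleton programmatically and fill in the resources afterwards. The sketch below only writes the standard top-level structure with a VM-count parameter; the actual VM, VNET, public IP and NSG resource definitions (and whether a student account can be created this way) are left as placeholders, since they are not specified above.

```python
import json

# Minimal ARM template skeleton; resource definitions are intentionally left empty.
template = {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "vmCount": {"type": "int", "defaultValue": 2},   # "x amount of Linux VMs"
        "adminUsername": {"type": "string"},              # management account
        "adminPassword": {"type": "securestring"},
    },
    "variables": {},
    "resources": [
        # VNET, subnet, NSG (polytech-only rules), public IPs and B1ls VMs would go here,
        # typically using a copy loop driven by parameters('vmCount').
    ],
    "outputs": {},
}

with open("oe1-template.json", "w") as f:
    json.dump(template, f, indent=2)
```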
349367,10468028011.0,IssuesEvent,2019-09-22 10:42:09,googleapis/google-cloud-ruby,https://api.github.com/repos/googleapis/google-cloud-ruby,closed,Synthesis failed for redis,api: redis autosynth failure priority: p1 type: bug,"Hello! Autosynth couldn't regenerate redis. :broken_heart:
Here's the output from running `synth.py`:
```
Cloning into 'working_repo'...
Switched to branch 'autosynth-redis'
Running synthtool
['/tmpfs/src/git/autosynth/env/bin/python3', '-m', 'synthtool', 'synth.py', '--']
synthtool > Executing /tmpfs/src/git/autosynth/working_repo/google-cloud-redis/synth.py.
synthtool > Ensuring dependencies.
synthtool > Pulling artman image.
latest: Pulling from googleapis/artman
Digest: sha256:66ca01f27ef7dc50fbfb7743b67028115a6a8acf43b2d82f9fc826de008adac4
Status: Image is up to date for googleapis/artman:latest
synthtool > Cloning googleapis.
synthtool > Running generator for google/cloud/redis/artman_redis_v1.yaml.
synthtool > Failed executing docker run --name artman-docker --rm -i -e HOST_USER_ID=1000 -e HOST_GROUP_ID=1000 -e RUNNING_IN_ARTMAN_DOCKER=True -v /home/kbuilder/.cache/synthtool/googleapis:/home/kbuilder/.cache/synthtool/googleapis -v /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles:/home/kbuilder/.cache/synthtool/googleapis/artman-genfiles -w /home/kbuilder/.cache/synthtool/googleapis googleapis/artman:latest /bin/bash -c artman --local --config google/cloud/redis/artman_redis_v1.yaml generate ruby_gapic:
artman> Final args:
artman> api_name: redis
artman> api_version: v1
artman> artifact_type: GAPIC
artman> aspect: ALL
artman> gapic_code_dir: /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/ruby/google-cloud-ruby/google-cloud-redis
artman> gapic_yaml: /home/kbuilder/.cache/synthtool/googleapis/google/cloud/redis/v1/redis_gapic.yaml
artman> generator_args: null
artman> import_proto_path:
artman> - /home/kbuilder/.cache/synthtool/googleapis
artman> language: ruby
artman> organization_name: google-cloud
artman> output_dir: /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles
artman> proto_deps:
artman> - name: google-common-protos
artman> proto_package: ''
artman> root_dir: /home/kbuilder/.cache/synthtool/googleapis
artman> samples: ''
artman> service_yaml: /home/kbuilder/.cache/synthtool/googleapis/google/cloud/redis/redis_v1.yaml
artman> src_proto_path:
artman> - /home/kbuilder/.cache/synthtool/googleapis/google/cloud/redis/v1
artman> toolkit_path: /toolkit
artman>
artman> Creating GapicClientPipeline.
artman.output >
WARNING: toplevel: (lint) control-presence: Service redis.googleapis.com does not have control environment configured.
ERROR: toplevel: Reference to unknown type ""locations.googleapis.com/Location"" on field google.cloud.redis.v1.ListInstancesRequest.google.cloud.redis.v1.ListInstancesRequest.parent
ERROR: toplevel: Reference to unknown type ""locations.googleapis.com/Location"" on field google.cloud.redis.v1.CreateInstanceRequest.google.cloud.redis.v1.CreateInstanceRequest.parent
WARNING: toplevel: (lint) control-presence: Service redis.googleapis.com does not have control environment configured.
ERROR: toplevel: Reference to unknown type ""locations.googleapis.com/Location"" on field google.cloud.redis.v1.ListInstancesRequest.google.cloud.redis.v1.ListInstancesRequest.parent
ERROR: toplevel: Reference to unknown type ""locations.googleapis.com/Location"" on field google.cloud.redis.v1.CreateInstanceRequest.google.cloud.redis.v1.CreateInstanceRequest.parent
artman> Traceback (most recent call last):
File ""/artman/artman/cli/main.py"", line 72, in main
engine.run()
File ""/usr/local/lib/python3.5/dist-packages/taskflow/engines/action_engine/engine.py"", line 247, in run
for _state in self.run_iter(timeout=timeout):
File ""/usr/local/lib/python3.5/dist-packages/taskflow/engines/action_engine/engine.py"", line 340, in run_iter
failure.Failure.reraise_if_any(er_failures)
File ""/usr/local/lib/python3.5/dist-packages/taskflow/types/failure.py"", line 339, in reraise_if_any
failures[0].reraise()
File ""/usr/local/lib/python3.5/dist-packages/taskflow/types/failure.py"", line 346, in reraise
six.reraise(*self._exc_info)
File ""/usr/local/lib/python3.5/dist-packages/six.py"", line 693, in reraise
raise value
File ""/usr/local/lib/python3.5/dist-packages/taskflow/engines/action_engine/executor.py"", line 53, in _execute_task
result = task.execute(**arguments)
File ""/artman/artman/tasks/gapic_tasks.py"", line 146, in execute
task_utils.gapic_gen_task(toolkit_path, [gapic_artifact] + args))
File ""/artman/artman/tasks/task_base.py"", line 64, in exec_command
raise e
File ""/artman/artman/tasks/task_base.py"", line 56, in exec_command
output = subprocess.check_output(args, stderr=subprocess.STDOUT)
File ""/usr/lib/python3.5/subprocess.py"", line 626, in check_output
**kwargs).stdout
File ""/usr/lib/python3.5/subprocess.py"", line 708, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['java', '-cp', '/toolkit/build/libs/gapic-generator-latest-fatjar.jar', 'com.google.api.codegen.GeneratorMain', 'LEGACY_GAPIC_AND_PACKAGE', '--descriptor_set=/home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/google-cloud-redis-v1.desc', '--package_yaml2=/home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/ruby_google-cloud-redis-v1_package2.yaml', '--output=/home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/ruby/google-cloud-ruby/google-cloud-redis', '--language=ruby', '--service_yaml=/home/kbuilder/.cache/synthtool/googleapis/google/cloud/redis/redis_v1.yaml', '--gapic_yaml=/home/kbuilder/.cache/synthtool/googleapis/google/cloud/redis/v1/redis_gapic.yaml']' returned non-zero exit status 1
Traceback (most recent call last):
File ""/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/runpy.py"", line 193, in _run_module_as_main
""__main__"", mod_spec)
File ""/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/runpy.py"", line 85, in _run_code
exec(code, run_globals)
File ""/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/__main__.py"", line 87, in
main()
File ""/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/click/core.py"", line 764, in __call__
return self.main(*args, **kwargs)
File ""/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/click/core.py"", line 717, in main
rv = self.invoke(ctx)
File ""/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/click/core.py"", line 956, in invoke
return ctx.invoke(self.callback, **ctx.params)
File ""/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/click/core.py"", line 555, in invoke
return callback(*args, **kwargs)
File ""/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/__main__.py"", line 79, in main
spec.loader.exec_module(synth_module) # type: ignore
File """", line 678, in exec_module
File """", line 205, in _call_with_frames_removed
File ""/tmpfs/src/git/autosynth/working_repo/google-cloud-redis/synth.py"", line 30, in
artman_output_name='google-cloud-ruby/google-cloud-redis'
File ""/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/gcp/gapic_generator.py"", line 58, in ruby_library
return self._generate_code(service, version, ""ruby"", **kwargs)
File ""/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/gcp/gapic_generator.py"", line 138, in _generate_code
generator_args=generator_args,
File ""/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/gcp/artman.py"", line 141, in run
shell.run(cmd, cwd=root_dir)
File ""/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/shell.py"", line 39, in run
raise exc
File ""/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/shell.py"", line 33, in run
encoding=""utf-8"",
File ""/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/subprocess.py"", line 418, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['docker', 'run', '--name', 'artman-docker', '--rm', '-i', '-e', 'HOST_USER_ID=1000', '-e', 'HOST_GROUP_ID=1000', '-e', 'RUNNING_IN_ARTMAN_DOCKER=True', '-v', '/home/kbuilder/.cache/synthtool/googleapis:/home/kbuilder/.cache/synthtool/googleapis', '-v', '/home/kbuilder/.cache/synthtool/googleapis/artman-genfiles:/home/kbuilder/.cache/synthtool/googleapis/artman-genfiles', '-w', PosixPath('/home/kbuilder/.cache/synthtool/googleapis'), 'googleapis/artman:latest', '/bin/bash', '-c', 'artman --local --config google/cloud/redis/artman_redis_v1.yaml generate ruby_gapic']' returned non-zero exit status 32.
synthtool > Cleaned up 0 temporary directories.
synthtool > Wrote metadata to synth.metadata.
Synthesis failed
```
Google internal developers can see the full log [here](https://sponge/33d501c0-dede-4a59-8f50-73663fbab2f6).
",1.0,"Synthesis failed for redis - Hello! Autosynth couldn't regenerate redis. :broken_heart:
Here's the output from running `synth.py`:
```
Cloning into 'working_repo'...
Switched to branch 'autosynth-redis'
Running synthtool
['/tmpfs/src/git/autosynth/env/bin/python3', '-m', 'synthtool', 'synth.py', '--']
synthtool > Executing /tmpfs/src/git/autosynth/working_repo/google-cloud-redis/synth.py.
synthtool > Ensuring dependencies.
synthtool > Pulling artman image.
latest: Pulling from googleapis/artman
Digest: sha256:66ca01f27ef7dc50fbfb7743b67028115a6a8acf43b2d82f9fc826de008adac4
Status: Image is up to date for googleapis/artman:latest
synthtool > Cloning googleapis.
synthtool > Running generator for google/cloud/redis/artman_redis_v1.yaml.
synthtool > Failed executing docker run --name artman-docker --rm -i -e HOST_USER_ID=1000 -e HOST_GROUP_ID=1000 -e RUNNING_IN_ARTMAN_DOCKER=True -v /home/kbuilder/.cache/synthtool/googleapis:/home/kbuilder/.cache/synthtool/googleapis -v /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles:/home/kbuilder/.cache/synthtool/googleapis/artman-genfiles -w /home/kbuilder/.cache/synthtool/googleapis googleapis/artman:latest /bin/bash -c artman --local --config google/cloud/redis/artman_redis_v1.yaml generate ruby_gapic:
artman> Final args:
artman> api_name: redis
artman> api_version: v1
artman> artifact_type: GAPIC
artman> aspect: ALL
artman> gapic_code_dir: /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/ruby/google-cloud-ruby/google-cloud-redis
artman> gapic_yaml: /home/kbuilder/.cache/synthtool/googleapis/google/cloud/redis/v1/redis_gapic.yaml
artman> generator_args: null
artman> import_proto_path:
artman> - /home/kbuilder/.cache/synthtool/googleapis
artman> language: ruby
artman> organization_name: google-cloud
artman> output_dir: /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles
artman> proto_deps:
artman> - name: google-common-protos
artman> proto_package: ''
artman> root_dir: /home/kbuilder/.cache/synthtool/googleapis
artman> samples: ''
artman> service_yaml: /home/kbuilder/.cache/synthtool/googleapis/google/cloud/redis/redis_v1.yaml
artman> src_proto_path:
artman> - /home/kbuilder/.cache/synthtool/googleapis/google/cloud/redis/v1
artman> toolkit_path: /toolkit
artman>
artman> Creating GapicClientPipeline.
artman.output >
WARNING: toplevel: (lint) control-presence: Service redis.googleapis.com does not have control environment configured.
ERROR: toplevel: Reference to unknown type ""locations.googleapis.com/Location"" on field google.cloud.redis.v1.ListInstancesRequest.google.cloud.redis.v1.ListInstancesRequest.parent
ERROR: toplevel: Reference to unknown type ""locations.googleapis.com/Location"" on field google.cloud.redis.v1.CreateInstanceRequest.google.cloud.redis.v1.CreateInstanceRequest.parent
WARNING: toplevel: (lint) control-presence: Service redis.googleapis.com does not have control environment configured.
ERROR: toplevel: Reference to unknown type ""locations.googleapis.com/Location"" on field google.cloud.redis.v1.ListInstancesRequest.google.cloud.redis.v1.ListInstancesRequest.parent
ERROR: toplevel: Reference to unknown type ""locations.googleapis.com/Location"" on field google.cloud.redis.v1.CreateInstanceRequest.google.cloud.redis.v1.CreateInstanceRequest.parent
artman> Traceback (most recent call last):
File ""/artman/artman/cli/main.py"", line 72, in main
engine.run()
File ""/usr/local/lib/python3.5/dist-packages/taskflow/engines/action_engine/engine.py"", line 247, in run
for _state in self.run_iter(timeout=timeout):
File ""/usr/local/lib/python3.5/dist-packages/taskflow/engines/action_engine/engine.py"", line 340, in run_iter
failure.Failure.reraise_if_any(er_failures)
File ""/usr/local/lib/python3.5/dist-packages/taskflow/types/failure.py"", line 339, in reraise_if_any
failures[0].reraise()
File ""/usr/local/lib/python3.5/dist-packages/taskflow/types/failure.py"", line 346, in reraise
six.reraise(*self._exc_info)
File ""/usr/local/lib/python3.5/dist-packages/six.py"", line 693, in reraise
raise value
File ""/usr/local/lib/python3.5/dist-packages/taskflow/engines/action_engine/executor.py"", line 53, in _execute_task
result = task.execute(**arguments)
File ""/artman/artman/tasks/gapic_tasks.py"", line 146, in execute
task_utils.gapic_gen_task(toolkit_path, [gapic_artifact] + args))
File ""/artman/artman/tasks/task_base.py"", line 64, in exec_command
raise e
File ""/artman/artman/tasks/task_base.py"", line 56, in exec_command
output = subprocess.check_output(args, stderr=subprocess.STDOUT)
File ""/usr/lib/python3.5/subprocess.py"", line 626, in check_output
**kwargs).stdout
File ""/usr/lib/python3.5/subprocess.py"", line 708, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['java', '-cp', '/toolkit/build/libs/gapic-generator-latest-fatjar.jar', 'com.google.api.codegen.GeneratorMain', 'LEGACY_GAPIC_AND_PACKAGE', '--descriptor_set=/home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/google-cloud-redis-v1.desc', '--package_yaml2=/home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/ruby_google-cloud-redis-v1_package2.yaml', '--output=/home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/ruby/google-cloud-ruby/google-cloud-redis', '--language=ruby', '--service_yaml=/home/kbuilder/.cache/synthtool/googleapis/google/cloud/redis/redis_v1.yaml', '--gapic_yaml=/home/kbuilder/.cache/synthtool/googleapis/google/cloud/redis/v1/redis_gapic.yaml']' returned non-zero exit status 1
Traceback (most recent call last):
File ""/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/runpy.py"", line 193, in _run_module_as_main
""__main__"", mod_spec)
File ""/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/runpy.py"", line 85, in _run_code
exec(code, run_globals)
File ""/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/__main__.py"", line 87, in
main()
File ""/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/click/core.py"", line 764, in __call__
return self.main(*args, **kwargs)
File ""/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/click/core.py"", line 717, in main
rv = self.invoke(ctx)
File ""/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/click/core.py"", line 956, in invoke
return ctx.invoke(self.callback, **ctx.params)
File ""/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/click/core.py"", line 555, in invoke
return callback(*args, **kwargs)
File ""/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/__main__.py"", line 79, in main
spec.loader.exec_module(synth_module) # type: ignore
File """", line 678, in exec_module
File """", line 205, in _call_with_frames_removed
File ""/tmpfs/src/git/autosynth/working_repo/google-cloud-redis/synth.py"", line 30, in
artman_output_name='google-cloud-ruby/google-cloud-redis'
File ""/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/gcp/gapic_generator.py"", line 58, in ruby_library
return self._generate_code(service, version, ""ruby"", **kwargs)
File ""/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/gcp/gapic_generator.py"", line 138, in _generate_code
generator_args=generator_args,
File ""/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/gcp/artman.py"", line 141, in run
shell.run(cmd, cwd=root_dir)
File ""/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/shell.py"", line 39, in run
raise exc
File ""/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/shell.py"", line 33, in run
encoding=""utf-8"",
File ""/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/subprocess.py"", line 418, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['docker', 'run', '--name', 'artman-docker', '--rm', '-i', '-e', 'HOST_USER_ID=1000', '-e', 'HOST_GROUP_ID=1000', '-e', 'RUNNING_IN_ARTMAN_DOCKER=True', '-v', '/home/kbuilder/.cache/synthtool/googleapis:/home/kbuilder/.cache/synthtool/googleapis', '-v', '/home/kbuilder/.cache/synthtool/googleapis/artman-genfiles:/home/kbuilder/.cache/synthtool/googleapis/artman-genfiles', '-w', PosixPath('/home/kbuilder/.cache/synthtool/googleapis'), 'googleapis/artman:latest', '/bin/bash', '-c', 'artman --local --config google/cloud/redis/artman_redis_v1.yaml generate ruby_gapic']' returned non-zero exit status 32.
synthtool > Cleaned up 0 temporary directories.
synthtool > Wrote metadata to synth.metadata.
Synthesis failed
```
Google internal developers can see the full log [here](https://sponge/33d501c0-dede-4a59-8f50-73663fbab2f6).
",0,synthesis failed for redis hello autosynth couldn t regenerate redis broken heart here s the output from running synth py cloning into working repo switched to branch autosynth redis running synthtool synthtool executing tmpfs src git autosynth working repo google cloud redis synth py synthtool ensuring dependencies synthtool pulling artman image latest pulling from googleapis artman digest status image is up to date for googleapis artman latest synthtool cloning googleapis synthtool running generator for google cloud redis artman redis yaml synthtool failed executing docker run name artman docker rm i e host user id e host group id e running in artman docker true v home kbuilder cache synthtool googleapis home kbuilder cache synthtool googleapis v home kbuilder cache synthtool googleapis artman genfiles home kbuilder cache synthtool googleapis artman genfiles w home kbuilder cache synthtool googleapis googleapis artman latest bin bash c artman local config google cloud redis artman redis yaml generate ruby gapic artman final args artman api name redis artman api version artman artifact type gapic artman aspect all artman gapic code dir home kbuilder cache synthtool googleapis artman genfiles ruby google cloud ruby google cloud redis artman gapic yaml home kbuilder cache synthtool googleapis google cloud redis redis gapic yaml artman generator args null artman import proto path artman home kbuilder cache synthtool googleapis artman language ruby artman organization name google cloud artman output dir home kbuilder cache synthtool googleapis artman genfiles artman proto deps artman name google common protos artman proto package artman root dir home kbuilder cache synthtool googleapis artman samples artman service yaml home kbuilder cache synthtool googleapis google cloud redis redis yaml artman src proto path artman home kbuilder cache synthtool googleapis google cloud redis artman toolkit path toolkit artman artman creating gapicclientpipeline artman output warning toplevel lint control presence service redis googleapis com does not have control environment configured error toplevel reference to unknown type locations googleapis com location on field google cloud redis listinstancesrequest google cloud redis listinstancesrequest parent error toplevel reference to unknown type locations googleapis com location on field google cloud redis createinstancerequest google cloud redis createinstancerequest parent warning toplevel lint control presence service redis googleapis com does not have control environment configured error toplevel reference to unknown type locations googleapis com location on field google cloud redis listinstancesrequest google cloud redis listinstancesrequest parent error toplevel reference to unknown type locations googleapis com location on field google cloud redis createinstancerequest google cloud redis createinstancerequest parent artman traceback most recent call last file artman artman cli main py line in main engine run file usr local lib dist packages taskflow engines action engine engine py line in run for state in self run iter timeout timeout file usr local lib dist packages taskflow engines action engine engine py line in run iter failure failure reraise if any er failures file usr local lib dist packages taskflow types failure py line in reraise if any failures reraise file usr local lib dist packages taskflow types failure py line in reraise six reraise self exc info file usr local lib dist packages six py line in reraise raise value file usr local lib 
dist packages taskflow engines action engine executor py line in execute task result task execute arguments file artman artman tasks gapic tasks py line in execute task utils gapic gen task toolkit path args file artman artman tasks task base py line in exec command raise e file artman artman tasks task base py line in exec command output subprocess check output args stderr subprocess stdout file usr lib subprocess py line in check output kwargs stdout file usr lib subprocess py line in run output stdout stderr stderr subprocess calledprocesserror command returned non zero exit status traceback most recent call last file home kbuilder pyenv versions lib runpy py line in run module as main main mod spec file home kbuilder pyenv versions lib runpy py line in run code exec code run globals file tmpfs src git autosynth env lib site packages synthtool main py line in main file tmpfs src git autosynth env lib site packages click core py line in call return self main args kwargs file tmpfs src git autosynth env lib site packages click core py line in main rv self invoke ctx file tmpfs src git autosynth env lib site packages click core py line in invoke return ctx invoke self callback ctx params file tmpfs src git autosynth env lib site packages click core py line in invoke return callback args kwargs file tmpfs src git autosynth env lib site packages synthtool main py line in main spec loader exec module synth module type ignore file line in exec module file line in call with frames removed file tmpfs src git autosynth working repo google cloud redis synth py line in artman output name google cloud ruby google cloud redis file tmpfs src git autosynth env lib site packages synthtool gcp gapic generator py line in ruby library return self generate code service version ruby kwargs file tmpfs src git autosynth env lib site packages synthtool gcp gapic generator py line in generate code generator args generator args file tmpfs src git autosynth env lib site packages synthtool gcp artman py line in run shell run cmd cwd root dir file tmpfs src git autosynth env lib site packages synthtool shell py line in run raise exc file tmpfs src git autosynth env lib site packages synthtool shell py line in run encoding utf file home kbuilder pyenv versions lib subprocess py line in run output stdout stderr stderr subprocess calledprocesserror command returned non zero exit status synthtool cleaned up temporary directories synthtool wrote metadata to synth metadata synthesis failed google internal developers can see the full log ,0
372816,26021595169.0,IssuesEvent,2022-12-21 13:07:04,mathew-fleisch/bashbot,https://api.github.com/repos/mathew-fleisch/bashbot,closed,Document Makefile,documentation,The makefile has some useful helper commands/targets that can be used to build or install existing binaries on a host machine.,1.0,Document Makefile - The makefile has some useful helper commands/targets that can be used to build or install existing binaries on a host machine.,0,document makefile the makefile has some useful helper command targets that can be used to build or install existing binaries on a host machine ,0
312756,9552835901.0,IssuesEvent,2019-05-02 17:40:47,WoWManiaUK/Blackwing-Lair,https://api.github.com/repos/WoWManiaUK/Blackwing-Lair,closed,[Quest/Order] Hero of the Sin'dorei - prerequisite missing,Fixed in Dev Priority zone 1-20,"https://www.wowhead.com/quest=9328/hero-of-the-sindorei
Should only become available to pick up from https://www.wowhead.com/npc=16239/magister-kaendris once you completed https://www.wowhead.com/quest=9167/the-traitors-destruction
Currently available to pick up without completing the previous chain.",1.0,"[Quest/Order] Hero of the Sin'dorei - prerequisite missing - https://www.wowhead.com/quest=9328/hero-of-the-sindorei
Should only become available to pick up from https://www.wowhead.com/npc=16239/magister-kaendris once you completed https://www.wowhead.com/quest=9167/the-traitors-destruction
Currently available to pick up without completing the previous chain.",0, hero of the sin dorei prerequisite missing should only become available to pick up from once you completed currently available to pick up without completing the previous chain ,0
6693,23744158689.0,IssuesEvent,2022-08-31 14:41:11,home-assistant/home-assistant.io,https://api.github.com/repos/home-assistant/home-assistant.io,closed,Information about trigger duration can not be found on the trigger page,automation,"### Feedback
When adding a trigger to an automation in the Home Assistant UI, you are given a field labeled ""Duration (optional)"".
I clicked the ""learn more about triggers"" link to figure out what that meant, but could not find anything relevant on the linked page. I'm sure I could try hunting with Google, but it would be much more accessible to just have help about that field on the linked page.
Thank you :)
### URL
https://www.home-assistant.io/docs/automation/trigger/
### Version
2022.8.7
### Additional information

",1.0,"Information about trigger duration can not be found on the trigger page - ### Feedback
When adding a trigger to an automation in the Home Assistant UI, you are given a field labeled ""Duration (optional)"".
I clicked the ""learn more about triggers"" link to figure out what that meant, but could not find anything relevant on the linked page. I'm sure I could try hunting with Google, but it would be much more accessible to just have help about that field on the linked page.
Thank you :)
### URL
https://www.home-assistant.io/docs/automation/trigger/
### Version
2022.8.7
### Additional information

",1,information about trigger duration can not be found on the trigger page feedback adding a trigger to an automation in the home assistant ui you are given a field labeled duration optional i clicked the learn more about triggers link to figure out what that meant but could not find anything relevant on the linked page i m sure i could try hunting with google but it would be much more accessible to just have help about that field on the linked page thank you url version additional information ,1
333369,10121030483.0,IssuesEvent,2019-07-31 14:50:59,BWRat/DES506_Oneiro,https://api.github.com/repos/BWRat/DES506_Oneiro,closed,Rudimentry ladder,Priority C,The ladder acts like a trap. Player cannot cancel climbing and must finish the whole action before they can do anything else. ,1.0,Rudimentry ladder - The ladder acts like a trap. Player cannot cancel climbing and must finish the whole action before they can do anything else. ,0,rudimentry ladder the ladder acts like a trap player cannot cancel climbing and must finish the whole action before they can do anything else ,0
8905,27190125544.0,IssuesEvent,2023-02-19 17:50:01,AnthonyMonterrosa/C-sharp-service-stack,https://api.github.com/repos/AnthonyMonterrosa/C-sharp-service-stack,closed,Separate GitHub Actions Build and Test into Separate Jobs.,automation enhancement,"Currently, the GitHub Action that is run as a PR check does the build and tests in one job. It is preferred that they are separate jobs so, if either fail, we can see which did fail at a glance while still in the PR's webpage.",1.0,"Separate GitHub Actions Build and Test into Separate Jobs. - Currently, the GitHub Action that is run as a PR check does the build and tests in one job. It is preferred that they are separate jobs so, if either fail, we can see which did fail at a glance while still in the PR's webpage.",1,separate github actions build and test into separate jobs currently the github action that is run as a pr check does the build and tests in one job it is preferred that they are separate jobs so if either fail we can see which did fail at a glance while still in the pr s webpage ,1
100974,21562551057.0,IssuesEvent,2022-05-01 11:36:46,joomla/joomla-cms,https://api.github.com/repos/joomla/joomla-cms,closed,[4.1.x] Cassiopea Registration page Privacy/Terms alignment,New Feature No Code Attached Yet J4 Frontend Template,"Hi guys,
about the [Cassiopea Registration page Privacy/Terms alignment](https://photos.app.goo.gl/6VZuH7Ja7RQ8yfKt5 ""Registration Privacy/Terms""),
Wouldn't it be better to add this by default:
.required.radio {
display: inline-flex;
gap: 1rem;
}
to align them horizontally and not waste precious space? ",1.0,"[4.1.x] Cassiopea Registration page Privacy/Terms alignment - Hi guys,
about the [Cassiopea Registration page Privacy/Terms alignment](https://photos.app.goo.gl/6VZuH7Ja7RQ8yfKt5 ""Registration Privacy/Terms""),
Wouldn't it be better to add this by default:
.required.radio {
display: inline-flex;
gap: 1rem;
}
to align them horizontally and not waste precious space? ",0, cassiopea registration page privacy terms alignment hi guys about the registration privacy terms should not be better to add by default required radio display inline flex gap to align them horizontally and don t waste precious space ,0
108645,11597422869.0,IssuesEvent,2020-02-24 20:50:45,BIAPT/Scripts,https://api.github.com/repos/BIAPT/Scripts,closed,Visualize the step-wise wPLI matrices to ensure that the analysis is correct,documentation enhancement,"Here the objectives are simple for the first experiment, we need to generate wPLI matrices with a small step (1 seconds for starting) and visualize the result properties. The analysis documentation should be done on the README.md.",1.0,"Visualize the step-wise wPLI matrices to ensure that the analysis is correct - Here the objectives are simple for the first experiment, we need to generate wPLI matrices with a small step (1 seconds for starting) and visualize the result properties. The analysis documentation should be done on the README.md.",0,visualize the step wise wpli matrices to ensure that the analysis is correct here the objectives are simple for the first experiment we need to generate wpli matrices with a small step seconds for starting and visualize the result properties the analysis documentation should be done on the readme md ,0
113072,11787059955.0,IssuesEvent,2020-03-17 13:25:42,zilliztech/arctern,https://api.github.com/repos/zilliztech/arctern,opened,Set up a local Conda channel for installing the Arctern,arctern-0.1.0 documentation,"## Report needed documentation
**Describe the documentation you'd like**
Set up a local Conda channel for installing the Arctern",1.0,"Set up a local Conda channel for installing the Arctern - ## Report needed documentation
**Describe the documentation you'd like**
Set up a local Conda channel for installing the Arctern",0,set up a local conda channel for installing the arctern report needed documentation describe the documentation you d like set up a local conda channel for installing the arctern,0
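A local Conda channel, as requested above, is essentially a directory that `conda index` has processed so it can be referenced with `-c file://…`. A minimal sketch, assuming `conda-build` is installed and an arctern package tarball has already been built (the paths and the package name are placeholders):

```python
import subprocess
from pathlib import Path

# Hypothetical channel layout: built packages are copied into a platform subdirectory.
channel = Path("/tmp/local-channel")
(channel / "linux-64").mkdir(parents=True, exist_ok=True)
# ...copy the built arctern .tar.bz2 package(s) into channel/linux-64 here...

# Generate repodata.json so conda can resolve packages from this directory.
subprocess.run(["conda", "index", str(channel)], check=True)

# Install from the local channel; the package name is a placeholder.
subprocess.run(
    ["conda", "install", "-y", "-c", f"file://{channel}", "arctern"],
    check=True,
)
```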
250867,27115567772.0,IssuesEvent,2023-02-15 18:17:21,cosmos/ibc-rs,https://api.github.com/repos/cosmos/ibc-rs,closed,Remove `todo!()`s for tendermint `ClientState`,A: good-first-issue A: urgent A: critical O: security,"There are 3 `todo!()`s to be removed ([one](https://github.com/cosmos/ibc-rs/blob/51ddc415db14241790459208f74451628491ee6c/crates/ibc/src/clients/ics07_tendermint/client_state.rs#L740), [two](https://github.com/cosmos/ibc-rs/blob/51ddc415db14241790459208f74451628491ee6c/crates/ibc/src/clients/ics07_tendermint/client_state.rs#L803) and [three](https://github.com/cosmos/ibc-rs/blob/51ddc415db14241790459208f74451628491ee6c/crates/ibc/src/clients/ics07_tendermint/client_state.rs#L830)).",True,"Remove `todo!()`s for tendermint `ClientState` - There are 3 `todo!()`s to be removed ([one](https://github.com/cosmos/ibc-rs/blob/51ddc415db14241790459208f74451628491ee6c/crates/ibc/src/clients/ics07_tendermint/client_state.rs#L740), [two](https://github.com/cosmos/ibc-rs/blob/51ddc415db14241790459208f74451628491ee6c/crates/ibc/src/clients/ics07_tendermint/client_state.rs#L803) and [three](https://github.com/cosmos/ibc-rs/blob/51ddc415db14241790459208f74451628491ee6c/crates/ibc/src/clients/ics07_tendermint/client_state.rs#L830)).",0,remove todo s for tendermint clientstate there are todo s to be removed and ,0
20055,13643200242.0,IssuesEvent,2020-09-25 16:43:30,niconoe/pyinaturalist,https://api.github.com/repos/niconoe/pyinaturalist,opened,Drop support for Python 3.5,dependencies infrastructure,"[Python 3.5 reached EOL on 2020-09-13](https://devguide.python.org/#status-of-python-branches). I think it would be reasonable to keep pyinaturalist compatible with Python 3.5 through v0.11 For v0.12+, we can remove 3.5 from the tox tests and start making use of Python 3.6 features: f-strings, type annotations for variables, etc.",1.0,"Drop support for Python 3.5 - [Python 3.5 reached EOL on 2020-09-13](https://devguide.python.org/#status-of-python-branches). I think it would be reasonable to keep pyinaturalist compatible with Python 3.5 through v0.11 For v0.12+, we can remove 3.5 from the tox tests and start making use of Python 3.6 features: f-strings, type annotations for variables, etc.",0,drop support for python i think it would be reasonable to keep pyinaturalist compatible with python through for we can remove from the tox tests and start making use of python features f strings type annotations for variables etc ,0
256591,19429091448.0,IssuesEvent,2021-12-21 09:50:33,Schlaue-Lise-IT-Project/schlaue-lise,https://api.github.com/repos/Schlaue-Lise-IT-Project/schlaue-lise,closed,Revise the README & the User Manual,documentation,"The [README](https://github.com/Schlaue-Lise-IT-Project/schlaue-lise/blob/main/README.md) and the [User Manual](https://github.com/Schlaue-Lise-IT-Project/schlaue-lise/blob/main/user-manual.md) need to be revised or filled in.
The following points should be kept in mind:
- The whole thing is our _report_, so make sure it contains sensible content
- Pay attention to gender-inclusive language; we use the gender colon (e.g. Anwender:innen)
- Use paragraphs in long texts, otherwise they become too tiring to read
- Use backticks (\`\`) when you want to highlight something related to the code, or instead of quotation marks (e.g. `conda`)
Current tasks (please extend)
- [x] User Manual (Sleeping)
- [x] User Manual (Hygiene)
- [x] User Manual (Donations)
- [x] User Manual (Medicine)
- [x] README",1.0,"Revise the README & the User Manual - The [README](https://github.com/Schlaue-Lise-IT-Project/schlaue-lise/blob/main/README.md) and the [User Manual](https://github.com/Schlaue-Lise-IT-Project/schlaue-lise/blob/main/user-manual.md) need to be revised or filled in.
The following points should be kept in mind:
- The whole thing is our _report_, so make sure it contains sensible content
- Pay attention to gender-inclusive language; we use the gender colon (e.g. Anwender:innen)
- Use paragraphs in long texts, otherwise they become too tiring to read
- Use backticks (\`\`) when you want to highlight something related to the code, or instead of quotation marks (e.g. `conda`)
Current tasks (please extend)
- [x] User Manual (Sleeping)
- [x] User Manual (Hygiene)
- [x] User Manual (Donations)
- [x] User Manual (Medicine)
- [x] README",0,überarbeiten der readme des user manuals das und das müssen überarbeitet bzw ausgefüllt werden folgende dinge gilt es dabei zu beachten das ganze ist unser bericht achtet also darauf dass dort sinnvolle sachen stehen es ist auf gendergerechte sprache zu achten wir verwenden den binnendoppelpunkt bspw anwender innen nutzt absätze in langen texten das wird sonst zu anstrengend zu lesen nutzt die backticks wenn ihr etwas hervorheben wollt was mit dem code zu tun hat oder anstelle von anführungszeichen bspw conda aktuelle tasks bitte erweitern user manual schlafen user manual hygiene user manual spenden user manual medizin readme,0
1580,10352913521.0,IssuesEvent,2019-09-05 10:17:14,big-neon/bn-web,https://api.github.com/repos/big-neon/bn-web,opened,Automation: Big Neon : Test 20: Refund Tickets,Automation,"**Pre-conditions:**
- User should have Admin access
- Event the user is selecting should have tickets that have been purchased.
**Steps:**
1. Log in as Admin with permission
2. Click on event on the left side bar
3. Click on the event
4. Go to ""Dashboard""
5. Click on ""Tools""
6. Click on ""Manage Orders""
7. Select the ticket which needs to be refunded
8. Confirm if refund amount is correct
9. Click on ""Refund"" in the bottom right corner after ticket/s are selected
10. Refund must be successful",1.0,"Automation: Big Neon : Test 20: Refund Tickets - **Pre-conditions:**
- User should have Admin access
- Event the user is selecting should have tickets that have been purchased.
**Steps:**
1. Log in as Admin with permission
2. Click on event on the left side bar
3. Click on the event
4. Go to ""Dashboard""
5. Click on ""Tools""
6. Click on ""Manage Orders""
7. Select the ticket which needs to be refunded
8. Confirm if refund amount is correct
9. Click on ""Refund"" in the bottom right corner after ticket/s are selected
10. Refund must be successful",1,automation big neon test refund tickets pre conditions user should have admin access event the user is selecting should have tickets that have been purchased steps log in as admin with permission click on event on the left side bar click on the event go to dashboard click on tools click on manage orders select the ticket which needs to be refunded confirm if refund amount is correct click on refund in the bottom right corner after ticket s are selected refund must be successful,1
259681,22504665755.0,IssuesEvent,2022-06-23 14:34:51,MPMG-DCC-UFMG/F01,https://api.github.com/repos/MPMG-DCC-UFMG/F01,opened,Generalization test for the Terceiro Setor - Dados de Parcerias tag - Minduri,generalization test development,DoD: Run the generalization test of the validator for the Terceiro Setor - Dados de Parcerias tag for the municipality of Minduri.,1.0,Generalization test for the Terceiro Setor - Dados de Parcerias tag - Minduri - DoD: Run the generalization test of the validator for the Terceiro Setor - Dados de Parcerias tag for the municipality of Minduri.,0,teste de generalizacao para a tag terceiro setor dados de parcerias minduri dod realizar o teste de generalização do validador da tag terceiro setor dados de parcerias para o município de minduri ,0
2412,11899473458.0,IssuesEvent,2020-03-30 09:03:54,elastic/beats,https://api.github.com/repos/elastic/beats,closed,[ci] Enable Jenkinsfile Pipeline for master and 7.x,[zube]: In Review automation ci,"We want to enable the Jenkinsfile based pipeline build for changes that affect master and 7.x, including pull-requests. And then disable the old Jenkins build for those same targets.
",1.0,"[ci] Enable Jenkinsfile Pipeline for master and 7.x - We want to enable the Jenkinsfile based pipeline build for changes that affect master and 7.x, including pull-requests. And then disable the old Jenkins build for those same targets.
",1, enable jenkinsfile pipeline for master and x we want to enable the jenkinsfile based pipeline build for changes that affect master and x including pull requests and then disable the old jenkins build for those same targets ,1
416111,28067274243.0,IssuesEvent,2023-03-29 16:18:50,microsoft/studentambassadors,https://api.github.com/repos/microsoft/studentambassadors,closed,Cognitive Services API frontend bug,documentation wontfix AI,"## Describe the bug
Every time the drop-down menu in ""Name"" is clicked, the display text changes. The same text is displayed for both 1st and 2nd options in the drop-down menu.
## To Reproduce
Steps to reproduce the behavior:
1. Go to https://westeurope.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateTranscription/console
2. Click on drop-down menu of ""Name""
4. See the text displayed in box
5. Select the 1st and 2nd options (same text is displayed)
## Expected behavior
The selected option from the drop-down menu should be displayed correctly
## Screenshots


### Desktop (please complete the following information):
- OS: Windows 10
- Browser: Microsoft Edge
- Version: 110.0.1587.50
#### 🎓 Add a tag to this issue for your current education role: **Student Ambassador**
***
",1.0,"Cognitive Services API frontend bug - ## Describe the bug
Every time the drop-down menu in ""Name"" is clicked, the display text changes. The same text is displayed for both 1st and 2nd options in the drop-down menu.
## To Reproduce
Steps to reproduce the behavior:
1. Go to https://westeurope.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateTranscription/console
2. Click on drop-down menu of ""Name""
4. See the text displayed in box
5. Select the 1st and 2nd options (same text is displayed)
## Expected behavior
The selected option from the drop-down menu should be displayed correctly
## Screenshots


### Desktop (please complete the following information):
- OS: Windows 10
- Browser: Microsoft Edge
- Version: 110.0.1587.50
#### 🎓 Add a tag to this issue for your current education role: **Student Ambassador**
***
",0,cognitive services api frontend bug describe the bug every time the drop down menu in name is clicked the display text changes the same text is displayed for both and options in the drop down menu to reproduce steps to reproduce the behavior go to click on drop down menu of name see the text displayed in box select the and options same text is displayed expected behavior the selected option from the drop down menu should be displayed correctly screenshots desktop please complete the following information os windows browser microsoft edge version 🎓 add a tag to this issue for your current education role student ambassador ,0
4448,16566018550.0,IssuesEvent,2021-05-29 12:28:31,home-assistant/core,https://api.github.com/repos/home-assistant/core,closed,Missing illuminance in Automation (GZCGQ01LM),integration: device_automation,"### The problem
I'm missing the option ""illuminance"" for the Xiaomi (GZCGQ01LM) in automations.
There are 4 entities, but in the automation I only see 3.
At the beginning of 2021 the option was there, but now it's gone in the UI.
I can use the ""illuminance"" function if I use it manually in the YAML file.

And these are the options in automations:

### What is version of Home Assistant Core has the issue?
core-2021.5.4
### What was the last working version of Home Assistant Core?
_No response_
### What type of installation are you running?
Home Assistant Core
### Integration causing the issue
Automation
### Link to integration documentation on our website
https://www.home-assistant.io/integrations/device_automation/
### Example YAML snippet
_No response_
### Anything in the logs that might be useful for us?
_No response_
### Additional information
_No response_",1.0,"Missing illuminance in Automation (GZCGQ01LM) - ### The problem
I'm missing the option ""illuminance"" for the Xiaomi (GZCGQ01LM) in automations.
There are 4 entities, but in the automation I only see 3.
At the beginning of 2021 the option was there, but now it's gone in the UI.
I can use the ""illuminance"" function if I use it manually in the YAML file.

And these are the options in automations:

### What is version of Home Assistant Core has the issue?
core-2021.5.4
### What was the last working version of Home Assistant Core?
_No response_
### What type of installation are you running?
Home Assistant Core
### Integration causing the issue
Automation
### Link to integration documentation on our website
https://www.home-assistant.io/integrations/device_automation/
### Example YAML snippet
_No response_
### Anything in the logs that might be useful for us?
_No response_
### Additional information
_No response_",1,missing illuminance in automation the problem i m missing with xiaomi in automations the option illuminance there are entities but in the automation i only see at the beginning of the option was there but now it s gone in the ui i can use the fuction illuminance if i use it manual in the yaml file and this are the options in automations what is version of home assistant core has the issue core what was the last working version of home assistant core no response what type of installation are you running home assistant core integration causing the issue automation link to integration documentation on our website example yaml snippet no response anything in the logs that might be useful for us no response additional information no response ,1
138,4059294520.0,IssuesEvent,2016-05-25 09:04:09,eclipse/smarthome,https://api.github.com/repos/eclipse/smarthome,closed,New rule engine: Add JSON demo for new rule engine.,Automation,"The demo should create a module types, templates, modules and rules defined by JSON files.
We can use sample.handler for this purpose",1.0,"New rule engine: Add JSON demo for new rule engine. - The demo should create a module types, templates, modules and rules defined by JSON files.
We can use sample.handler for this purpose",1,new rule engine add json demo for new rule engine the demo should create a module types templates modules and rules defined by json files we can use sample handler for this purpose,1
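As a quick illustration of the record above: the issue only asks that module types, templates, modules and rules ship as JSON files, exercised through sample.handler. The Python sketch below writes one such sample rule file; the field names (`uid`, `triggers`, `conditions`, `actions`, `sample.trigger`, `sample.action`) are invented placeholders, not the actual Eclipse SmartHome rule schema.

```python
import json

# Hypothetical rule definition -- field names are placeholders, not the real
# Eclipse SmartHome automation JSON schema.
sample_rule = {
    "uid": "sample.rule.1",
    "name": "Sample rule backed by sample.handler",
    "triggers": [{"id": "trigger1", "type": "sample.trigger"}],
    "conditions": [],
    "actions": [{"id": "action1", "type": "sample.action"}],
}

# Write the demo file that a JSON-based demo bundle could ship.
with open("sample-rule.json", "w") as fh:
    json.dump([sample_rule], fh, indent=2)
print("wrote sample-rule.json")
```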
740279,25741744891.0,IssuesEvent,2022-12-08 06:44:07,encorelab/ck-board,https://api.github.com/repos/encorelab/ck-board,opened,Create TODO message field and modal pop-up,enhancement high priority,"1. When creating or editing a TODO item, add a description field that can contain links (just like posts)
2. When an item is clicked, open a modal pop-up for viewing the item, containing all fields
- title
- description
- type (value or ""None"")
- group (group name or ""None"")
- date",1.0,"Create TODO message field and modal pop-up - 1. When creating or editing a TODO item, add a description field that can contain links (just like posts)
2. When an item is clicked, open a modal pop-up for viewing the item, containing all fields
- title
- description
- type (value or ""None"")
- group (group name or ""None"")
- date",0,create todo message field and modal pop up when creating or editing a todo item add a description field that can contain links just like posts img width alt screen shot at am src when item item is clicked open a modal pop up for viewing the item containing all fields title description type value or none group group name or none date,0
942,8781396552.0,IssuesEvent,2018-12-19 20:20:48,MicrosoftDocs/azure-docs,https://api.github.com/repos/MicrosoftDocs/azure-docs,closed,404 errors when clicking on links on this page,assigned-to-author automation/svc doc-bug triaged,"When on the page ""https://docs.microsoft.com/en-us/azure/automation/automation-connections#windows-powershell-cmdlets""
and clicking on links to the Cmdlets, you are taken to a 404 page.
For example ""Get-AzureRmAutomationConnection"" links to ""https://docs.microsoft.com/en-us/powershell/module/azurerm.automation/get-azurermautomationconnection""
In particular, for our business, we need a working link to the cmdlet ""Remove-AzureRmAutomationModule"".
The previous working link, ""https://docs.microsoft.com/en-us/powershell/module/azurerm.automation/remove-azurermautomationmodule"", now returns a 404 error as well.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 81c284ea-3656-836f-7eab-388773e0e382
* Version Independent ID: 71329bef-2d4f-4ff6-5a03-83b99b2269e9
* Content: [Connection assets in Azure Automation](https://docs.microsoft.com/en-us/azure/automation/automation-connections)
* Content Source: [articles/automation/automation-connections.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-connections.md)
* Service: **automation**
* GitHub Login: @georgewallace
* Microsoft Alias: **gwallace**",1.0,"404 errors when clicking on links on this page - When on the page ""https://docs.microsoft.com/en-us/azure/automation/automation-connections#windows-powershell-cmdlets""
and clicking on links to the Cmdlets, you are taken to a 404 page.
For example ""Get-AzureRmAutomationConnection"" links to ""https://docs.microsoft.com/en-us/powershell/module/azurerm.automation/get-azurermautomationconnection""
In particular, for our business, we need a working link to the cmdlet ""Remove-AzureRmAutomationModule"".
The previous working link, ""https://docs.microsoft.com/en-us/powershell/module/azurerm.automation/remove-azurermautomationmodule"", now returns a 404 error as well.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 81c284ea-3656-836f-7eab-388773e0e382
* Version Independent ID: 71329bef-2d4f-4ff6-5a03-83b99b2269e9
* Content: [Connection assets in Azure Automation](https://docs.microsoft.com/en-us/azure/automation/automation-connections)
* Content Source: [articles/automation/automation-connections.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-connections.md)
* Service: **automation**
* GitHub Login: @georgewallace
* Microsoft Alias: **gwallace**",1, errors when clicking on links on this page when on the page and clicking on links to the cmdlets you are taken to a page for example get azurermautomationconnection links to in particular for our business we need a working link to the cmdlet remove azurermautomationmodule the previous working link now returns a error as well document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service automation github login georgewallace microsoft alias gwallace ,1
3436,13765494911.0,IssuesEvent,2020-10-07 13:29:08,home-assistant/core,https://api.github.com/repos/home-assistant/core,closed,Automation: Holding state trigger breaks on automation reloading,integration: automation,"
## The problem
Hi! I noticed that a holding state trigger (i.e. with `for` condition) does not fire if the `automation.reload` service is called when the entity is already in target state but the time has not passed yet. For example, consider a simple heater that is turned on and off every minute. If I then edit any automation via UI (even an unrelated automation), it will implicitly call the `automation.reload` service, and the heater will stay in one of the states forever (or until I toggle it manually), which could potentially lead to disaster results (freeze or overheat something).
## Environment
- Home Assistant Core release with the issue: 0.115
- Last working Home Assistant Core release (if known): N/A
- Operating environment (OS/Container/Supervised/Core): OpenWRT, Python 3.7.8
- Integration causing this issue: automation
- Link to integration documentation on our website: https://www.home-assistant.io/docs/automation/trigger/
## Problem-relevant `configuration.yaml`
```yaml
automation:
  - trigger:
      platform: state
      entity_id: switch.heater
      to: ""on""
      for: ""00:01:00""
    action:
      service: switch.turn_off
      entity_id: switch.heater
  - trigger:
      platform: state
      entity_id: switch.heater
      to: ""off""
      for: ""00:01:00""
    action:
      service: switch.turn_on
      entity_id: switch.heater
```
## Traceback/Error logs
```txt
```
## Additional information
On load, the automation should lookup history to find out the remaining time. At the very least, it should restart the timer from scratch. It should also lookup its own history in order to avoid triggering twice on the same event, but it should never miss an action even if I reload the automations just a millisecond before it would have fired. Same should apply to HA restarts in the ideal world.",1.0,"Automation: Holding state trigger breaks on automation reloading -
## The problem
Hi! I noticed that a holding state trigger (i.e. with `for` condition) does not fire if the `automation.reload` service is called when the entity is already in target state but the time has not passed yet. For example, consider a simple heater that is turned on and off every minute. If I then edit any automation via UI (even an unrelated automation), it will implicitly call the `automation.reload` service, and the heater will stay in one of the states forever (or until I toggle it manually), which could potentially lead to disaster results (freeze or overheat something).
## Environment
- Home Assistant Core release with the issue: 0.115
- Last working Home Assistant Core release (if known): N/A
- Operating environment (OS/Container/Supervised/Core): OpenWRT, Python 3.7.8
- Integration causing this issue: automation
- Link to integration documentation on our website: https://www.home-assistant.io/docs/automation/trigger/
## Problem-relevant `configuration.yaml`
```yaml
automation:
  - trigger:
      platform: state
      entity_id: switch.heater
      to: ""on""
      for: ""00:01:00""
    action:
      service: switch.turn_off
      entity_id: switch.heater
  - trigger:
      platform: state
      entity_id: switch.heater
      to: ""off""
      for: ""00:01:00""
    action:
      service: switch.turn_on
      entity_id: switch.heater
```
## Traceback/Error logs
```txt
```
## Additional information
On load, the automation should lookup history to find out the remaining time. At the very least, it should restart the timer from scratch. It should also lookup its own history in order to avoid triggering twice on the same event, but it should never miss an action even if I reload the automations just a millisecond before it would have fired. Same should apply to HA restarts in the ideal world.",1,automation holding state trigger breaks on automation reloading read this first if you need additional help with this template please refer to make sure you are running the latest version of home assistant before reporting an issue do not report issues for integrations if you are using custom components or integrations provide as many details as possible paste logs configuration samples and code into the backticks do not delete any text from this template otherwise your issue may be closed without comment the problem describe the issue you are experiencing here to communicate to the maintainers tell us what you were trying to do and what happened hi i noticed that a holding state trigger i e with for condition does not fire if the automation reload service is called when the entity is already in target state but the time has not passed yet for example consider a simple heater that is turned on and off every minute if i then edit any automation via ui even an unrelated automation it will implicitly call the automation reload service and the heater will stay in one of the states forever or until i toggle it manually which could potentially lead to disaster results freeze or overheat something environment provide details about the versions you are using which helps us to reproduce and find the issue quicker version information is found in the home assistant frontend configuration info home assistant core release with the issue last working home assistant core release if known n a operating environment os container supervised core openwrt python integration causing this issue automation link to integration documentation on our website problem relevant configuration yaml an example configuration that caused the problem for you fill this out even if it seems unimportant to you please be sure to remove personal information like passwords private urls and other credentials yaml automation trigger platform state entity id switch heater to on for action service switch turn off entity id switch heater trigger platform state entity id switch heater to off for action service switch turn on entity id switch heater traceback error logs if you come across any trace or error logs please provide them txt additional information on load the automation should lookup history to find out the remaining time at the very least it should restart the timer from scratch it should also lookup its own history in order to avoid triggering twice on the same event but it should never miss an action even if i reload the automations just a millisecond before it would have fired same should apply to ha restarts in the ideal world ,1
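To make the failure mode in the record above easier to picture, here is a minimal, self-contained Python asyncio sketch. It is not Home Assistant code: the class and method names are invented, and it only models the assumption that a `for:`-style trigger starts a hold timer on a state change and that reloading cancels pending timers without consulting history.

```python
import asyncio

class ForTrigger:
    """Toy model of a 'state changed to X for N seconds' trigger."""

    def __init__(self, delay, action):
        self.delay = delay
        self.action = action
        self._pending = None

    def on_state(self, new_state, target="on"):
        # The hold timer starts only when a state-change event arrives.
        if new_state == target and self._pending is None:
            self._pending = asyncio.create_task(self._wait_and_fire())

    async def _wait_and_fire(self):
        await asyncio.sleep(self.delay)
        self.action()

    def unload(self):
        # Reloading automations tears the trigger down and cancels the timer.
        if self._pending is not None:
            self._pending.cancel()
            self._pending = None

async def main():
    fired = []
    trigger = ForTrigger(delay=0.2, action=lambda: fired.append("turn_off"))
    trigger.on_state("on")      # heater switched on, 0.2 s hold begins
    await asyncio.sleep(0.1)
    trigger.unload()            # automations reloaded mid-hold
    trigger = ForTrigger(delay=0.2, action=lambda: fired.append("turn_off"))
    # The new trigger never sees another "on" transition, so nothing fires.
    await asyncio.sleep(0.5)
    print(fired)                # [] -> the heater would stay on indefinitely

asyncio.run(main())
```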
6761,23865171437.0,IssuesEvent,2022-09-07 10:22:41,smcnab1/op-question-mark,https://api.github.com/repos/smcnab1/op-question-mark,opened,[FR] Implement Bed & Presence Detection in Automations,Status: Confirmed Type: Feature Priority: Low For: Automations,"**Bed Sensors**
- [ ] Implement in turning off automations
- [ ] Implement in turning on automations
- [ ] Implement in managing security automations
- [ ] Set up security automation for overnight
**Presence Sensor**
- [ ] Implement maintaining light automation until presence moves room",1.0,"[FR] Implement Bed & Presence Detection in Automations - **Bed Sensors**
- [ ] Implement in turning off automations
- [ ] Implement in turning on automations
- [ ] Implement in managing security automations
- [ ] Set up security automation for overnight
**Presence Sensor**
- [ ] Implement maintaining light automation until presence moves room",1, implement bed presence detection in automations bed sensors implement in turning off automations implement in turning on automations implement in managing security automations set up security automation for overnight presence sensor implement maintaining light automation until presence moves room,1
5086,18530515822.0,IssuesEvent,2021-10-21 04:59:56,astropy/astropy,https://api.github.com/repos/astropy/astropy,closed,MNT: Have a bot to auto-backport as PR,Feature Request needs-discussion dev-automation,"When a PR against `master` is merged, it would be desirable to have a bot to automatically open up follow-up PR(s) to backport changes that are just merged against older release branch(es), depending on the relevant PR milestone.
If the automatic backport PR(s) encounter difficulties, such as failed CI or conflicts, the bot should then take follow-up actions (apply special labels or create comments) so that manual intervention can be done.
If possible, we should not ""roll our own,"" but rather look at how other major projects are doing their backports, and see how we can reuse existing solutions. If all fails, @Cadair said he has a special hack that works ""half the time""...
It is something we should aim for sooner than later, so I am going to add a milestone to this issue.",1.0,"MNT: Have a bot to auto-backport as PR - When a PR against `master` is merged, it would be desirable to have a bot to automatically open up follow-up PR(s) to backport changes that are just merged against older release branch(es), depending on the relevant PR milestone.
If the automatic backport PR(s) encounter difficulties, such as failed CI or conflicts, the bot should then take follow-up actions (apply special labels or create comments) so that manual intervention can be done.
If possible, we should not ""roll our own,"" but rather look at how other major projects are doing their backports, and see how we can reuse existing solutions. If all fails, @Cadair said he has a special hack that works ""half the time""...
It is something we should aim for sooner than later, so I am going to add a milestone to this issue.",1,mnt have a bot to auto backport as pr when a pr against master is merged it would be desirable to have a bot to automatically open up follow up pr s to backport changes that are just merged against older release branch es depending on the relevant pr milestone if the automatic backport pr s encounter difficulties such as failed ci or conflicts the bot should then take follow up actions apply special labels or create comments so that manual intervention can be done if possible we should not roll our own but rather look at how other major projects are doing their backports and see how we can reuse existing solutions if all fails cadair said he has a special hack that works half the time it is something we should aim for sooner than later so i am going to add a milestone to this issue ,1
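As a rough sketch of the step such a bot would automate, the Python snippet below cherry-picks a merged commit onto a release branch and backs off to manual handling on conflict. The SHA and branch names are placeholders, this is not the tooling astropy ultimately adopted, and a real bot would also open the follow-up pull request through the GitHub API.

```python
import subprocess

def backport(sha: str, release_branch: str) -> bool:
    """Cherry-pick `sha` onto `release_branch`; return False if a human must take over."""
    work_branch = f"backport-{sha[:8]}-to-{release_branch}"
    subprocess.run(["git", "fetch", "origin", release_branch], check=True)
    subprocess.run(["git", "checkout", "-b", work_branch, f"origin/{release_branch}"], check=True)
    result = subprocess.run(["git", "cherry-pick", "-x", sha])
    if result.returncode != 0:
        # Conflict: abort and flag the original PR for manual intervention.
        subprocess.run(["git", "cherry-pick", "--abort"])
        return False
    subprocess.run(["git", "push", "origin", work_branch], check=True)
    # A real bot would now open a PR against `release_branch` via the GitHub API.
    return True

if __name__ == "__main__":
    ok = backport("0123456789abcdef0123456789abcdef01234567", "v4.2.x")
    print("backport branch pushed" if ok else "manual backport needed")
```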
96561,12139371273.0,IssuesEvent,2020-04-23 18:44:54,solex2006/SELIProject,https://api.github.com/repos/solex2006/SELIProject,opened,Error Suggestion,1 - Planning Feature Design Notes :notebook: discussion,"This is a ""not-end"" requirement. What I want to mean is that this should be considered for any new page you create that Student can access. Also should correct the old pages.
I will use the label **Feature Design Notes** to this cases
***************
If an input error is automatically detected and suggestions for correction are known, then the suggestions are provided to the user, unless it would jeopardize the security or purpose of the content.
### Examples
#### Example:
# :beetle: Test Procedures
###
Expected Results:
# :busts_in_silhouette: Benefits",1.0,"Error Suggestion - This is a ""not-end"" requirement. What I want to mean is that this should be considered for any new page you create that Student can access. Also should correct the old pages.
I will use the label **Feature Design Notes** to this cases
***************
If an input error is automatically detected and suggestions for correction are known, then the suggestions are provided to the user, unless it would jeopardize the security or purpose of the content.
### Examples
#### Example:
# :beetle: Test Procedures
###
Expected Results:
# :busts_in_silhouette: Benefits",0,error suggestion this is a not end requirement what i want to mean is that this should be considered for any new page you create that student can access also should correct the old pages i will use the label feature design notes to this cases if an input error is automatically detected and suggestions for correction are known then the suggestions are provided to the user unless it would jeopardize the security or purpose of the content examples example beetle test procedures expected results busts in silhouette benefits,0
365640,25545815629.0,IssuesEvent,2022-11-29 18:42:11,PhilanthropyDataCommons/service,https://api.github.com/repos/PhilanthropyDataCommons/service,closed,Setup instructions omit a necessary `.env.test` change,documentation,"I just followed our [setup instructions](https://github.com/PhilanthropyDataCommons/service#setup) mostly-successfully. The only hiccup was when I ran tests for the first time and received multiple errors with the same block of failing code:
```
● Test suite failed to run
ENOENT: no such file or directory, open 'secret_api_keys.txt'
2 |
3 | const validKeysFile = process.env.API_KEYS_FILE ?? 'test_keys.txt';
> 4 | const data = fs.readFileSync(validKeysFile, 'utf8').split('\n');
| ^
5 | export const dummyApiKey = { 'x-api-key': data[0] };
6 |
at Object. (src/test/dummyApiKey.ts:4:17)
```
Skipping down to the [API Keys section](https://github.com/PhilanthropyDataCommons/service#setup) made it clear I should have set `API_KEYS_FILE=test_keys.txt` in `.env.test`. We should make that explicit in the setup block.",1.0,"Setup instructions omit a necessary `.env.test` change - I just followed our [setup instructions](https://github.com/PhilanthropyDataCommons/service#setup) mostly-successfully. The only hiccup was when I ran tests for the first time and received multiple errors with the same block of failing code:
```
● Test suite failed to run
ENOENT: no such file or directory, open 'secret_api_keys.txt'
2 |
3 | const validKeysFile = process.env.API_KEYS_FILE ?? 'test_keys.txt';
> 4 | const data = fs.readFileSync(validKeysFile, 'utf8').split('\n');
| ^
5 | export const dummyApiKey = { 'x-api-key': data[0] };
6 |
at Object. (src/test/dummyApiKey.ts:4:17)
```
Skipping down to the [API Keys section](https://github.com/PhilanthropyDataCommons/service#setup) made it clear I should have set `API_KEYS_FILE=test_keys.txt` in `.env.test`. We should make that explicit in the setup block.",0,setup instructions omit a necessary env test change i just followed our mostly successfully the only hiccup was when i ran tests for the first time and received multiple errors with the same block of failing code ● test suite failed to run enoent no such file or directory open secret api keys txt const validkeysfile process env api keys file test keys txt const data fs readfilesync validkeysfile split n export const dummyapikey x api key data at object src test dummyapikey ts skipping down to the made it clear i should have set api keys file test keys txt in env test we should make that explicit in the setup block ,0
10459,26992473992.0,IssuesEvent,2023-02-09 21:08:58,MicrosoftDocs/architecture-center,https://api.github.com/repos/MicrosoftDocs/architecture-center,closed,Add numbers to diagram,assigned-to-author triaged architecture-center/svc example-scenario/subsvc Pri1,"there is a numerical listing below the diagram, it would be helpful if the diagram showed the numbers so that the text descriptions could be more easily correlated.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: d2c472ec-9be4-6e0e-eac6-0236e2e1044a
* Version Independent ID: d2c472ec-9be4-6e0e-eac6-0236e2e1044a
* Content: [Network-hardened web app - Azure Example Scenarios](https://docs.microsoft.com/en-us/azure/architecture/example-scenario/security/hardened-web-app)
* Content Source: [docs/example-scenario/security/hardened-web-app.yml](https://github.com/microsoftdocs/architecture-center/blob/main/docs/example-scenario/security/hardened-web-app.yml)
* Service: **architecture-center**
* Sub-service: **example-scenario**
* GitHub Login: @damaccar
* Microsoft Alias: **damaccar**",1.0,"Add numbers to diagram - there is a numerical listing below the diagram, it would be helpful if the diagram showed the numbers so that the text descriptions could be more easily correlated.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: d2c472ec-9be4-6e0e-eac6-0236e2e1044a
* Version Independent ID: d2c472ec-9be4-6e0e-eac6-0236e2e1044a
* Content: [Network-hardened web app - Azure Example Scenarios](https://docs.microsoft.com/en-us/azure/architecture/example-scenario/security/hardened-web-app)
* Content Source: [docs/example-scenario/security/hardened-web-app.yml](https://github.com/microsoftdocs/architecture-center/blob/main/docs/example-scenario/security/hardened-web-app.yml)
* Service: **architecture-center**
* Sub-service: **example-scenario**
* GitHub Login: @damaccar
* Microsoft Alias: **damaccar**",0,add numbers to diagram there is a numerical listing below the diagram it would be helpful if the diagram showed the numbers so that the text descriptions could be more easily correlated document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service architecture center sub service example scenario github login damaccar microsoft alias damaccar ,0
1657,10542530407.0,IssuesEvent,2019-10-02 13:22:06,elastic/apm-integration-testing,https://api.github.com/repos/elastic/apm-integration-testing,closed,apm-server + logstash + ILM,automation subtask,"When using logstash to index apm-server output while ILM is enabled, this change is needed:
```patch
diff --git a/docker/logstash/pipeline/apm.conf b/docker/logstash/pipeline/apm.conf
index 1db7fdc..31fc1df 100644
--- a/docker/logstash/pipeline/apm.conf
+++ b/docker/logstash/pipeline/apm.conf
@@ -34,6 +34,6 @@ filter {
output {
elasticsearch {
hosts => [""elasticsearch:9200""]
- index => ""%{[@metadata][beat]}-%{[@metadata][version]}%{[@metadata][index_suffix]}-%{+YYYY.MM.dd}""
+ index => ""%{[@metadata][beat]}-%{[@metadata][version]}%{[@metadata][index_suffix]}""
}
}
```
it would also be nice to include `pipeline => ""apm""` for versions that install that pipeline.",1.0,"apm-server + logstash + ILM - When using logstash to index apm-server output while ILM is enabled, this change is needed:
```patch
diff --git a/docker/logstash/pipeline/apm.conf b/docker/logstash/pipeline/apm.conf
index 1db7fdc..31fc1df 100644
--- a/docker/logstash/pipeline/apm.conf
+++ b/docker/logstash/pipeline/apm.conf
@@ -34,6 +34,6 @@ filter {
output {
elasticsearch {
hosts => [""elasticsearch:9200""]
- index => ""%{[@metadata][beat]}-%{[@metadata][version]}%{[@metadata][index_suffix]}-%{+YYYY.MM.dd}""
+ index => ""%{[@metadata][beat]}-%{[@metadata][version]}%{[@metadata][index_suffix]}""
}
}
```
it would also be nice to include `pipeline => ""apm""` for versions that install that pipeline.",1,apm server logstash ilm when using logstash to index apm server output while ilm is enabled this change is needed patch diff git a docker logstash pipeline apm conf b docker logstash pipeline apm conf index a docker logstash pipeline apm conf b docker logstash pipeline apm conf filter output elasticsearch hosts index yyyy mm dd index it would also be nice to include pipeline apm for versions that install that pipeline ,1
620,7549323598.0,IssuesEvent,2018-04-18 13:58:04,MicrosoftDocs/azure-docs,https://api.github.com/repos/MicrosoftDocs/azure-docs,closed,Multiple Subscription Update management ,assigned-to-author automation product-question triaged,"I have Multiple subscription (Prod,Dev,Test),now am in confusion of how to use Azure Update management across the subscription.
Is it possible to have a single Azure Automation account enabled with Update Management for all my subscriptions, or do I need an Azure Automation account enabled with Update Management in each of my subscriptions?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: c3461048-c7fc-3979-a818-39af99d5e6bb
* Version Independent ID: d0e5e766-ef63-d934-b21b-678933a5cc65
* Content: [Manage updates and patches for your Azure Windows VMs](https://docs.microsoft.com/en-us/azure/automation/automation-tutorial-update-management)
* Content Source: [articles/automation/automation-tutorial-update-management.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-tutorial-update-management.md)
* Service: **automation**
* GitHub Login: @zjalexander
* Microsoft Alias: **zachal**",1.0,"Multiple Subscription Update management - I have Multiple subscription (Prod,Dev,Test),now am in confusion of how to use Azure Update management across the subscription.
Is it possible to have a single Azure Automation account enabled with Update Management for all my subscriptions, or do I need an Azure Automation account enabled with Update Management in each of my subscriptions?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: c3461048-c7fc-3979-a818-39af99d5e6bb
* Version Independent ID: d0e5e766-ef63-d934-b21b-678933a5cc65
* Content: [Manage updates and patches for your Azure Windows VMs](https://docs.microsoft.com/en-us/azure/automation/automation-tutorial-update-management)
* Content Source: [articles/automation/automation-tutorial-update-management.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-tutorial-update-management.md)
* Service: **automation**
* GitHub Login: @zjalexander
* Microsoft Alias: **zachal**",1,multiple subscription update management i have multiple subscription prod dev test now am in confusion of how to use azure update management across the subscription is that possible to have single azure automation account enabled with update management for all my subscription or do i need to have azure automation account enabled with update management in all my subscription document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service automation github login zjalexander microsoft alias zachal ,1
40874,6875291691.0,IssuesEvent,2017-11-19 12:14:18,junit-team/junit5,https://api.github.com/repos/junit-team/junit5,closed,Update Asciidoctor PDF backend of the user-guide,status: blocked theme: documentation,"## Overview
The PDF backend of the user guide is disabled at the moment, as it does not work with Java 9.
See `documentation/documentation.gradle:83` for backend configuration.
See https://github.com/jruby/jruby/issues/4805 for the underlying issue.
## Deliverables
- [ ] Enable PDF backend when https://github.com/jruby/jruby/issues/4805 is solved and JRuby **9.1.14** is released.
",1.0,"Update Asciidoctor PDF backend of the user-guide - ## Overview
The PDF backend of the user guide is disabled at the moment, as it does not work with Java 9.
See `documentation/documentation.gradle:83` for backend configuration.
See https://github.com/jruby/jruby/issues/4805 for the underlying issue.
## Deliverables
- [ ] Enable PDF backend when https://github.com/jruby/jruby/issues/4805 is solved and JRuby **9.1.14** is released.
",0,update asciidoctor pdf backend of the user guide overview the pdf backend of is disabled at the moment as does not work with java see documentation documentation gradle for backend configuration see for the underlying issue deliverables enable pdf backend when is solved and jruby is released ,0
450467,31925861097.0,IssuesEvent,2023-09-19 01:40:50,vercel/next.js,https://api.github.com/repos/vercel/next.js,closed,Docs: Get static paths fallback value issue,template: documentation,"### What is the improvement or update you wish to see?
There is an error with the example of the Dynamic Routes fallback value for the Pages Router (value `true` is not working, giving build errors).
### Is there any context that might help us understand?
Reproducing guide:
1. Clone [this repo](https://github.com/z4nr34l/nextjs-preprender-reproduce.git)
2. Run `next build` or `pnpm run build` inside
3. Watch it giving SSG errors
- [x] I'll prepare PR shortly to fix that.
### Does the docs page already exist? Please link to it.
https://nextjs.org/docs/pages/building-your-application/data-fetching/get-static-paths",1.0,"Docs: Get static paths fallback value issue - ### What is the improvement or update you wish to see?
There is an error with the example of the Dynamic Routes fallback value for the Pages Router (value `true` is not working, giving build errors).
### Is there any context that might help us understand?
Reproducing guide:
1. Clone [this repo](https://github.com/z4nr34l/nextjs-preprender-reproduce.git)
2. Run `next build` or `pnpm run build` inside
3. Watch it giving SSG errors
- [x] I'll prepare PR shortly to fix that.
### Does the docs page already exist? Please link to it.
https://nextjs.org/docs/pages/building-your-application/data-fetching/get-static-paths",0,docs get static paths fallback value issue what is the improvement or update you wish to see there is an error with example od dynamic routes fallback value for pages router value true is not working giving build errors is there any context that might help us understand reproducing guide clone run next build or pnpm run build inside watch it giving ssg errors i ll prepare pr shortly to fix that does the docs page already exist please link to it ,0
1841,10924371270.0,IssuesEvent,2019-11-22 09:59:41,sourcegraph/sourcegraph,https://api.github.com/repos/sourcegraph/sourcegraph,closed,Replacer service is never started by server in the docker image,automation bug,"`replacer` is missing in the server globals:
https://sourcegraph.com/github.com/sourcegraph/sourcegraph@master/-/blob/cmd/server/shared/globals.go#L12:5",1.0,"Replacer service is never started by server in the docker image - `replacer` is missing in the server globals:
https://sourcegraph.com/github.com/sourcegraph/sourcegraph@master/-/blob/cmd/server/shared/globals.go#L12:5",1,replacer service is never started by server in the docker image replacer is missing in the server globals ,1
161491,12546168066.0,IssuesEvent,2020-06-05 20:13:07,pytorch/pytorch,https://api.github.com/repos/pytorch/pytorch,closed,DISABLED test_backward_node_failure (__main__.TensorPipeAgentDistAutogradTestWithSpawn),high priority module: rpc module: tensorpipe topic: flaky-tests triage review triaged,"
https://app.circleci.com/pipelines/github/pytorch/pytorch/176463/workflows/013c36ff-c568-4726-a10f-fc6fc342ac0c/jobs/5670885/steps
```
Jun 03 17:38:08 test_backward_node_failure (__main__.TensorPipeAgentDistAutogradTestWithSpawn) ... [W tensorpipe_agent.cpp:312] RPC agent for worker3 encountered error when reading incoming request: pipe closed
Jun 03 17:38:08 [E container.cpp:248] Could not release Dist Autograd Context on node 0: pipe closed
Jun 03 17:38:08 [W tensorpipe_agent.cpp:312] RPC agent for worker0 encountered error when reading incoming request: EOF: end of file
Jun 03 17:38:08 [W tensorpipe_agent.cpp:280] RPC agent for worker0 encountered error when writing outgoing response: EOF: end of file
Jun 03 17:38:08 [W tensorpipe_agent.cpp:436] RPC agent for worker0 encountered error when writing outgoing request: ECONNREFUSED: connection refused
Jun 03 17:38:08 [E container.cpp:248] Could not release Dist Autograd Context on node 2: pipe closed
Jun 03 17:38:08 [W tensorpipe_agent.cpp:312] RPC agent for worker1 encountered error when reading incoming request: pipe closed
Jun 03 17:38:08 [W tensorpipe_agent.cpp:312] RPC agent for worker2 encountered error when reading incoming request: EOF: end of file
Jun 03 17:38:08 [W tensorpipe_agent.cpp:436] RPC agent for worker2 encountered error when writing outgoing request: ECONNREFUSED: connection refused
Jun 03 17:38:08 [W tensorpipe_agent.cpp:280] RPC agent for worker2 encountered error when writing outgoing response: EOF: end of file
Jun 03 17:38:08 [W tensorpipe_agent.cpp:436] RPC agent for worker2 encountered error when writing outgoing request: ECONNREFUSED: connection refused
Jun 03 17:38:08 [W tensorpipe_agent.cpp:453] RPC agent for worker0 encountered error when reading incoming request: EOF: end of file
Jun 03 17:38:08 [W tensorpipe_agent.cpp:436] RPC agent for worker2 encountered error when writing outgoing request: EPIPE: broken pipe
Jun 03 17:38:08 [W tensorpipe_agent.cpp:436] RPC agent for worker2 encountered error when writing outgoing request: EPIPE: broken pipe
Jun 03 17:38:08 [W tensorpipe_agent.cpp:436] RPC agent for worker0 encountered error when writing outgoing request: ECONNREFUSED: connection refused
Jun 03 17:38:08 [W tensorpipe_agent.cpp:436] RPC agent for worker2 encountered error when writing outgoing request: ECONNREFUSED: connection refused
Jun 03 17:38:08 [W tensorpipe_agent.cpp:436] RPC agent for worker2 encountered error when writing outgoing request: EPIPE: broken pipe
Jun 03 17:38:08 [W tensorpipe_agent.cpp:436] RPC agent for worker0 encountered error when writing outgoing request: EOF: end of file
Jun 03 17:38:08 [W tensorpipe_agent.cpp:436] RPC agent for worker0 encountered error when writing outgoing request: ECONNREFUSED: connection refused
Jun 03 17:38:08 [W tensorpipe_agent.cpp:436] RPC agent for worker0 encountered error when writing outgoing request: EOF: end of file
Jun 03 17:39:47 Timing out after 100 seconds and killing subprocesses.
Jun 03 17:39:47 ERROR (100.070s)
```
```
Jun 03 17:43:08 ======================================================================
Jun 03 17:43:08 ERROR [100.070s]: test_backward_node_failure (__main__.TensorPipeAgentDistAutogradTestWithSpawn)
Jun 03 17:43:08 ----------------------------------------------------------------------
Jun 03 17:43:08 Traceback (most recent call last):
Jun 03 17:43:08 File ""/opt/conda/lib/python3.8/site-packages/torch/testing/_internal/common_distributed.py"", line 204, in wrapper
Jun 03 17:43:08 self._join_processes(fn)
Jun 03 17:43:08 File ""/opt/conda/lib/python3.8/site-packages/torch/testing/_internal/common_distributed.py"", line 306, in _join_processes
Jun 03 17:43:08 self._check_return_codes(elapsed_time)
Jun 03 17:43:08 File ""/opt/conda/lib/python3.8/site-packages/torch/testing/_internal/common_distributed.py"", line 344, in _check_return_codes
Jun 03 17:43:08 raise RuntimeError('Process {} terminated or timed out after {} seconds'.format(i, elapsed_time))
Jun 03 17:43:08 RuntimeError: Process 2 terminated or timed out after 100.05238389968872 seconds
```
cc @ezyang @gchanan @zou3519 @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @gqchen @aazzolini @rohan-varma @xush6528 @jjlilley @osalpekar @jiayisuse @lw @beauby",1.0,"DISABLED test_backward_node_failure (__main__.TensorPipeAgentDistAutogradTestWithSpawn) -
https://app.circleci.com/pipelines/github/pytorch/pytorch/176463/workflows/013c36ff-c568-4726-a10f-fc6fc342ac0c/jobs/5670885/steps
```
Jun 03 17:38:08 test_backward_node_failure (__main__.TensorPipeAgentDistAutogradTestWithSpawn) ... [W tensorpipe_agent.cpp:312] RPC agent for worker3 encountered error when reading incoming request: pipe closed
Jun 03 17:38:08 [E container.cpp:248] Could not release Dist Autograd Context on node 0: pipe closed
Jun 03 17:38:08 [W tensorpipe_agent.cpp:312] RPC agent for worker0 encountered error when reading incoming request: EOF: end of file
Jun 03 17:38:08 [W tensorpipe_agent.cpp:280] RPC agent for worker0 encountered error when writing outgoing response: EOF: end of file
Jun 03 17:38:08 [W tensorpipe_agent.cpp:436] RPC agent for worker0 encountered error when writing outgoing request: ECONNREFUSED: connection refused
Jun 03 17:38:08 [E container.cpp:248] Could not release Dist Autograd Context on node 2: pipe closed
Jun 03 17:38:08 [W tensorpipe_agent.cpp:312] RPC agent for worker1 encountered error when reading incoming request: pipe closed
Jun 03 17:38:08 [W tensorpipe_agent.cpp:312] RPC agent for worker2 encountered error when reading incoming request: EOF: end of file
Jun 03 17:38:08 [W tensorpipe_agent.cpp:436] RPC agent for worker2 encountered error when writing outgoing request: ECONNREFUSED: connection refused
Jun 03 17:38:08 [W tensorpipe_agent.cpp:280] RPC agent for worker2 encountered error when writing outgoing response: EOF: end of file
Jun 03 17:38:08 [W tensorpipe_agent.cpp:436] RPC agent for worker2 encountered error when writing outgoing request: ECONNREFUSED: connection refused
Jun 03 17:38:08 [W tensorpipe_agent.cpp:453] RPC agent for worker0 encountered error when reading incoming request: EOF: end of file
Jun 03 17:38:08 [W tensorpipe_agent.cpp:436] RPC agent for worker2 encountered error when writing outgoing request: EPIPE: broken pipe
Jun 03 17:38:08 [W tensorpipe_agent.cpp:436] RPC agent for worker2 encountered error when writing outgoing request: EPIPE: broken pipe
Jun 03 17:38:08 [W tensorpipe_agent.cpp:436] RPC agent for worker0 encountered error when writing outgoing request: ECONNREFUSED: connection refused
Jun 03 17:38:08 [W tensorpipe_agent.cpp:436] RPC agent for worker2 encountered error when writing outgoing request: ECONNREFUSED: connection refused
Jun 03 17:38:08 [W tensorpipe_agent.cpp:436] RPC agent for worker2 encountered error when writing outgoing request: EPIPE: broken pipe
Jun 03 17:38:08 [W tensorpipe_agent.cpp:436] RPC agent for worker0 encountered error when writing outgoing request: EOF: end of file
Jun 03 17:38:08 [W tensorpipe_agent.cpp:436] RPC agent for worker0 encountered error when writing outgoing request: ECONNREFUSED: connection refused
Jun 03 17:38:08 [W tensorpipe_agent.cpp:436] RPC agent for worker0 encountered error when writing outgoing request: EOF: end of file
Jun 03 17:39:47 Timing out after 100 seconds and killing subprocesses.
Jun 03 17:39:47 ERROR (100.070s)
```
```
Jun 03 17:43:08 ======================================================================
Jun 03 17:43:08 ERROR [100.070s]: test_backward_node_failure (__main__.TensorPipeAgentDistAutogradTestWithSpawn)
Jun 03 17:43:08 ----------------------------------------------------------------------
Jun 03 17:43:08 Traceback (most recent call last):
Jun 03 17:43:08 File ""/opt/conda/lib/python3.8/site-packages/torch/testing/_internal/common_distributed.py"", line 204, in wrapper
Jun 03 17:43:08 self._join_processes(fn)
Jun 03 17:43:08 File ""/opt/conda/lib/python3.8/site-packages/torch/testing/_internal/common_distributed.py"", line 306, in _join_processes
Jun 03 17:43:08 self._check_return_codes(elapsed_time)
Jun 03 17:43:08 File ""/opt/conda/lib/python3.8/site-packages/torch/testing/_internal/common_distributed.py"", line 344, in _check_return_codes
Jun 03 17:43:08 raise RuntimeError('Process {} terminated or timed out after {} seconds'.format(i, elapsed_time))
Jun 03 17:43:08 RuntimeError: Process 2 terminated or timed out after 100.05238389968872 seconds
```
cc @ezyang @gchanan @zou3519 @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @gqchen @aazzolini @rohan-varma @xush6528 @jjlilley @osalpekar @jiayisuse @lw @beauby",0,disabled test backward node failure main tensorpipeagentdistautogradtestwithspawn jun test backward node failure main tensorpipeagentdistautogradtestwithspawn rpc agent for encountered error when reading incoming request pipe closed jun could not release dist autograd context on node pipe closed jun rpc agent for encountered error when reading incoming request eof end of file jun rpc agent for encountered error when writing outgoing response eof end of file jun rpc agent for encountered error when writing outgoing request econnrefused connection refused jun could not release dist autograd context on node pipe closed jun rpc agent for encountered error when reading incoming request pipe closed jun rpc agent for encountered error when reading incoming request eof end of file jun rpc agent for encountered error when writing outgoing request econnrefused connection refused jun rpc agent for encountered error when writing outgoing response eof end of file jun rpc agent for encountered error when writing outgoing request econnrefused connection refused jun rpc agent for encountered error when reading incoming request eof end of file jun rpc agent for encountered error when writing outgoing request epipe broken pipe jun rpc agent for encountered error when writing outgoing request epipe broken pipe jun rpc agent for encountered error when writing outgoing request econnrefused connection refused jun rpc agent for encountered error when writing outgoing request econnrefused connection refused jun rpc agent for encountered error when writing outgoing request epipe broken pipe jun rpc agent for encountered error when writing outgoing request eof end of file jun rpc agent for encountered error when writing outgoing request econnrefused connection refused jun rpc agent for encountered error when writing outgoing request eof end of file jun timing out after seconds and killing subprocesses jun error jun jun error test backward node failure main tensorpipeagentdistautogradtestwithspawn jun jun traceback most recent call last jun file opt conda lib site packages torch testing internal common distributed py line in wrapper jun self join processes fn jun file opt conda lib site packages torch testing internal common distributed py line in join processes jun self check return codes elapsed time jun file opt conda lib site packages torch testing internal common distributed py line in check return codes jun raise runtimeerror process terminated or timed out after seconds format i elapsed time jun runtimeerror process terminated or timed out after seconds cc ezyang gchanan pietern mrshenli zhaojuanmao satgera gqchen aazzolini rohan varma jjlilley osalpekar jiayisuse lw beauby,0
251581,8017426787.0,IssuesEvent,2018-07-25 15:53:39,CARLI/vufind,https://api.github.com/repos/CARLI/vufind,closed,"Remove color of former ""Live Status Unavailable"" gray box on results page",Accepted Ready for Prod priority issue,"In #161 we decided to remove the wording from the ""Live Status Unavailable"" box on the results page because it was confusing and didn't add any value. However, that will leave us with a small, gray box (see example in 11 and 12 in screenshot below). Can we remove the color from that box or hide it?

The CSS that controls that gray color appears to be:
< span class=""status"">
< span class=""label label-default"">< /span>
< /span>
.label-default {
background-color: #777;
}
",1.0,"Remove color of former ""Live Status Unavailable"" gray box on results page - In #161 we decided to remove the wording from the ""Live Status Unavailable"" box on the results page because it was confusing and didn't add any value. However, that will leave us with a small, gray box (see example in 11 and 12 in screenshot below). Can we remove the color from that box or hide it?

The CSS that controls that gray color appears to be:
< span class=""status"">
< span class=""label label-default"">< /span>
< /span>
.label-default {
background-color: #777;
}
",0,remove color of former live status unavailable gray box on results page in we decided to remove the wording from the live status unavailable box on the results page because it was confusing and didn t add any value however that will leave us with a small gray box see example in and in screenshot below can we remove the color from that box or hide it the css that controls that gray color appears to be label default background color ,0
529773,15395204320.0,IssuesEvent,2021-03-03 18:54:41,longhorn/longhorn,https://api.github.com/repos/longhorn/longhorn,closed,[BUG] Updating the config map in the Longhorn yaml doesn't set the values of the settings.,priority/2 wontfix,"**Describe the bug**
Deploy Longhorn-master after setting some values like `concurrent-automatic-engine-upgrade-per-node-limit` in the ConfigMap in the Longhorn yaml file. The values are not reflected in the Longhorn settings once deployed.
**To Reproduce**
Steps to reproduce the behavior:
1. Deploy Longhorn v1.1.0 on a K8s cluster.
2. Create some volumes and attach them to pods.
3. Change the `concurrent-automatic-engine-upgrade-per-node-limit` value to 2 in the config map. Change some more values in the Config map like below.
```
apiVersion: v1
kind: ConfigMap
metadata:
  name: longhorn-default-setting
  namespace: longhorn-system
data:
  default-setting.yaml: |-
    backup-target:
    backup-target-credential-secret:
    allow-recurring-job-while-volume-detached:
    create-default-disk-labeled-nodes:
    default-data-path:
    replica-soft-anti-affinity:
    storage-over-provisioning-percentage:
    storage-minimal-available-percentage:
    upgrade-checker:
    default-replica-count:
    default-data-locality:
    guaranteed-engine-cpu:
    default-longhorn-static-storage-class:
    backupstore-poll-interval:
    taint-toleration:
    priority-class:
    auto-salvage:
    auto-delete-pod-when-volume-detached-unexpectedly:
    disable-scheduling-on-cordoned-node:
    replica-zone-soft-anti-affinity:
    volume-attachment-recovery-policy:
    node-down-pod-deletion-policy:
    allow-node-drain-with-last-healthy-replica:true
    mkfs-ext4-parameters:'abc'
    disable-replica-rebuild:
    replica-replenishment-wait-interval:100
    disable-revision-counter:
    system-managed-pods-image-pull-policy:
    allow-volume-creation-with-degraded-availability:false
    auto-cleanup-system-generated-snapshot:
    concurrent-automatic-engine-upgrade-per-node-limit:2
    backing-image-cleanup-wait-interval:
```
4. Deploy Longhorn using kubectl command. Check the Longhorn setting, the values are not reflected.
**Expected behavior**
User should be able to change the `longhorn-default-setting` and deploy Longhorn with those values.
**Environment:**
- Longhorn version: Longhorn-master `03/01/2021`
- Kubernetes distro (e.g. RKE/K3s/EKS/OpenShift) and version: K8s v1.20.4 - RKE
- Number of management node in the cluster: 1
- Number of worker node in the cluster: 3
- Node config
- OS type and version: Ubuntu 1.20
- CPU per node: 2 vcpus
- Memory per node: 4 GB
- Disk type(e.g. SSD/NVMe): SSD
- Network bandwidth between the nodes: 5 Gigabyte
- Underlying Infrastructure (e.g. on AWS/GCE, EKS/GKE, VMWare/KVM, Baremetal): DO
- Number of Longhorn volumes in the cluster: 10
**Additional context**
Add any other context about the problem here.
",1.0,"[BUG] Updating the config map in the Longhorn yaml doesn't set the values of the settings. - **Describe the bug**
Deploy Longhorn-master after setting some values like `concurrent-automatic-engine-upgrade-per-node-limit` in the ConfigMap in the Longhorn yaml file. The values are not reflected in the Longhorn settings once deployed.
**To Reproduce**
Steps to reproduce the behavior:
1. Deploy Longhorn v1.1.0 on a K8s cluster.
2. Create some volumes and attach them to pods.
3. Change the `concurrent-automatic-engine-upgrade-per-node-limit` value to 2 in the config map. Change some more values in the Config map like below.
```
apiVersion: v1
kind: ConfigMap
metadata:
  name: longhorn-default-setting
  namespace: longhorn-system
data:
  default-setting.yaml: |-
    backup-target:
    backup-target-credential-secret:
    allow-recurring-job-while-volume-detached:
    create-default-disk-labeled-nodes:
    default-data-path:
    replica-soft-anti-affinity:
    storage-over-provisioning-percentage:
    storage-minimal-available-percentage:
    upgrade-checker:
    default-replica-count:
    default-data-locality:
    guaranteed-engine-cpu:
    default-longhorn-static-storage-class:
    backupstore-poll-interval:
    taint-toleration:
    priority-class:
    auto-salvage:
    auto-delete-pod-when-volume-detached-unexpectedly:
    disable-scheduling-on-cordoned-node:
    replica-zone-soft-anti-affinity:
    volume-attachment-recovery-policy:
    node-down-pod-deletion-policy:
    allow-node-drain-with-last-healthy-replica:true
    mkfs-ext4-parameters:'abc'
    disable-replica-rebuild:
    replica-replenishment-wait-interval:100
    disable-revision-counter:
    system-managed-pods-image-pull-policy:
    allow-volume-creation-with-degraded-availability:false
    auto-cleanup-system-generated-snapshot:
    concurrent-automatic-engine-upgrade-per-node-limit:2
    backing-image-cleanup-wait-interval:
```
4. Deploy Longhorn using kubectl command. Check the Longhorn setting, the values are not reflected.
**Expected behavior**
User should be able to change the `longhorn-default-setting` and deploy Longhorn with those values.
**Environment:**
- Longhorn version: Longhorn-master `03/01/2021`
- Kubernetes distro (e.g. RKE/K3s/EKS/OpenShift) and version: K8s v1.20.4 - RKE
- Number of management node in the cluster: 1
- Number of worker node in the cluster: 3
- Node config
- OS type and version: Ubuntu 1.20
- CPU per node: 2 vcpus
- Memory per node: 4 GB
- Disk type(e.g. SSD/NVMe): SSD
- Network bandwidth between the nodes: 5 Gigabyte
- Underlying Infrastructure (e.g. on AWS/GCE, EKS/GKE, VMWare/KVM, Baremetal): DO
- Number of Longhorn volumes in the cluster: 10
**Additional context**
Add any other context about the problem here.
",0, updating the config map in the longhorn yaml doesn t set the values of the settings describe the bug deploy longhorn master after setting some values like concurrent automatic engine upgrade per node limit in the config map in the longhorn yaml file the values are not reflected the longhorn setting once deployed to reproduce steps to reproduce the behavior deploy longhorn on a cluster create some volumes and attach them to pods change the concurrent automatic engine upgrade per node limit value to in the config map change some more values in the config map like below apiversion kind configmap metadata name longhorn default setting namespace longhorn system data default setting yaml backup target backup target credential secret allow recurring job while volume detached create default disk labeled nodes default data path replica soft anti affinity storage over provisioning percentage storage minimal available percentage upgrade checker default replica count default data locality guaranteed engine cpu default longhorn static storage class backupstore poll interval taint toleration priority class auto salvage auto delete pod when volume detached unexpectedly disable scheduling on cordoned node replica zone soft anti affinity volume attachment recovery policy node down pod deletion policy allow node drain with last healthy replica true mkfs parameters abc disable replica rebuild replica replenishment wait interval disable revision counter system managed pods image pull policy allow volume creation with degraded availability false auto cleanup system generated snapshot concurrent automatic engine upgrade per node limit backing image cleanup wait interval deploy longhorn using kubectl command check the longhorn setting the values are not reflected expected behavior user should be able to change the longhorn default setting and deploy longhorn with those values environment longhorn version longhorn master kubernetes distro e g rke eks openshift and version rke number of management node in the cluster number of worker node in the cluster node config os type and version ubuntu cpu per node vcpus memory per node gb disk type e g ssd nvme ssd network bandwidth between the nodes gigabyte underlying infrastructure e g on aws gce eks gke vmware kvm baremetal do number of longhorn volumes in the cluster additional context add any other context about the problem here ,0
9475,28502087666.0,IssuesEvent,2023-04-18 18:10:29,keycloak/keycloak-benchmark,https://api.github.com/repos/keycloak/keycloak-benchmark,opened,Add support to openshift provisioning module to create base entities to support gatling benchmark and dataset scripts,enhancement provision automation dataset,"### Description
Add support to openshift provisioning module to create base entities to support gatling benchmark and dataset scripts
- Similar to the minikube module, we need support for creating the needed service-account-enabled client, test-realm, and users to start a benchmark against the OpenShift-cluster-based Keycloak instance.
- Akin to the above point, we also need to be able to deploy the dataset module to the OpenShift-cluster-based Keycloak instance and create the needed datasets for various entities.
### Discussion
_No response_
### Motivation
_No response_
### Details
The question I have for this particular feature request is along the lines of implementation for gatlinguser task from minikube/Taskfile.yaml
Do we want to create these `tasks` under the `common/Taskfile.yml` ? That way we can simply modify the `gatlinguser` to pick up the `KC_HOSTNAME_SUFFIX` from the `.env` file to be under a conditional if block to only run the varied version of the below bash command when the `KC_HOSTNAME_SUFFIX` variable exists in the context.
Suggested Change:
```
- >
bash -c '
if [ ""{{.KC_HOSTNAME_SUFFIX}}"" != """" ];
then ../keycloak-cli/keycloak/bin/kcadm.sh config credentials --server https://keycloak.{{.KC_HOSTNAME_SUFFIX}}/ --realm master --user admin --password admin;
else ../keycloak-cli/keycloak/bin/kcadm.sh config credentials --server https://keycloak.{{.IP}}.nip.io/ --realm master --user admin --password admin;
fi'
```
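To make the intent of that conditional concrete, here is a minimal Python sketch of the decision it encodes; the function name and defaults are illustrative assumptions, not part of the actual Taskfiles.
```python
# Illustrative only: the URL selection that the bash conditional above performs.
# KC_HOSTNAME_SUFFIX and IP mirror the Taskfile variables read from the .env file;
# the function name and signature are assumptions made for this sketch.
def kcadm_server_url(kc_hostname_suffix, ip):
    if kc_hostname_suffix:
        # OpenShift case: Keycloak is reachable via the cluster's hostname suffix.
        return f'https://keycloak.{kc_hostname_suffix}/'
    # minikube case: fall back to the nip.io wildcard DNS name built from the cluster IP.
    return f'https://keycloak.{ip}.nip.io/'

# Example: kcadm_server_url(None, '192.168.49.2') -> 'https://keycloak.192.168.49.2.nip.io/'
```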
For the dataset module, I believe there would be no changes needed, but would need some testing to make sure we don't hit any runtime issues.",1.0,"Add support to openshift provisioning module to create base entities to support gatling benchmark and dataset scripts - ### Description
Add support to openshift provisioning module to create base entities to support gatling benchmark and dataset scripts
- Similar to the minikube module, we need support for creating the required service-account-enabled client, test-realm and users to start a benchmark against the OpenShift-cluster-based Keycloak instance.
- Akin to the point above, we also need to be able to deploy the dataset module to the OpenShift-cluster-based Keycloak instance and create the datasets needed for the various entities.
### Discussion
_No response_
### Motivation
_No response_
### Details
The question I have for this particular feature request is along the lines of implementation for gatlinguser task from minikube/Taskfile.yaml
Do we want to create these `tasks` under the `common/Taskfile.yml` ? That way we can simply modify the `gatlinguser` to pick up the `KC_HOSTNAME_SUFFIX` from the `.env` file to be under a conditional if block to only run the varied version of the below bash command when the `KC_HOSTNAME_SUFFIX` variable exists in the context.
Suggested Change:
```
- >
bash -c '
if [ ""{{.KC_HOSTNAME_SUFFIX}}"" != """" ];
then ../keycloak-cli/keycloak/bin/kcadm.sh config credentials --server https://keycloak.{{.KC_HOSTNAME_SUFFIX}}/ --realm master --user admin --password admin;
else ../keycloak-cli/keycloak/bin/kcadm.sh config credentials --server https://keycloak.{{.IP}}.nip.io/ --realm master --user admin --password admin;
fi'
```
For the dataset module, I believe there would be no changes needed, but would need some testing to make sure we don't hit any runtime issues.",1,add support to openshift provisioning module to create base entities to support gatling benchmark and dataset scripts description add support to openshift provisioning module to create base entities to support gatling benchmark and dataset scripts similar to minikube module we need to have support to create the needed service account enabled client test realm and users to start a benchmark against the openshift cluster based keycloak instance akin to the above point we also need to be able to deploy dataset module to the openshift cluster based keycloak instance and create needed datasets for various entities discussion no response motivation no response details the question i have for this particular feature request is along the lines of implementation for gatlinguser task from minikube taskfile yaml do we want to create these tasks under the common taskfile yml that way we can simply modify the gatlinguser to pick up the kc hostname suffix from the env file to be under a conditional if block to only run the varied version of the below bash command when the kc hostname suffix variable exists in the context suggested change bash c if then keycloak cli keycloak bin kcadm sh config credentials server realm master user admin password admin else keycloak cli keycloak bin kcadm sh config credentials server realm master user admin password admin fi for the dataset module i believe there would be no changes needed but would need some testing to make sure we don t hit any runtime issues ,1
8687,2611535966.0,IssuesEvent,2015-02-27 06:05:51,chrsmith/hedgewars,https://api.github.com/repos/chrsmith/hedgewars,closed,Keyboard Layout,auto-migrated Priority-Medium Type-Defect,"```
If I have the Russian keyboard layout turned on on my OSX 10.7.5, some buttons in the game,
e.g. P, T, etc., don't work.
```
Original issue reported on code.google.com by `maxis...@gmail.com` on 18 Jan 2014 at 10:08
* Merged into: #192",1.0,"Keyboard Layout - ```
If I have the Russian keyboard layout turned on on my OSX 10.7.5, some buttons in the game,
e.g. P, T, etc., don't work.
```
Original issue reported on code.google.com by `maxis...@gmail.com` on 18 Jan 2014 at 10:08
* Merged into: #192",0,keyboard layout if i have russian keyboard layout turned on my osx some buttons in game e g p t etc doesnt work original issue reported on code google com by maxis gmail com on jan at merged into ,0
1828,10888576553.0,IssuesEvent,2019-11-18 16:30:30,sourcegraph/sourcegraph,https://api.github.com/repos/sourcegraph/sourcegraph,opened,DRAFT!/WIP! Automation Tracking Issue 3.11,automation,"### Goals
* Improve stability & performance
* Improve developer speed & UX by refactoring and paying off tech debt
* Catch up on features
### Backend
- [ ] For some repositories `gitserver` cannot apply the diff #6625
- [ ] Continuously refactor existing code, pay off tech debt and continuously improve developer UX: #6572
- [ ] Improve performance, stability and observability when executing `CampaignPlans` and `createCampaign`
- [ ] Set an upper time limit on `CampaignJob` execution
- [ ] Add metrics/tracing to `previewCampaignPlan` and `createCampaign`
- [ ] Use a persistent queue instead of goroutines (see #6572 ""No persistent queue"")
- [ ] Execute `ChangesetJob`s in parallel (see #6572 ""GitHub rate limit and abuse detection"")
- [ ] Correctly handle deletions
- [ ] Define and implement what happens when a {repository,external-service} gets deleted
- [ ] Non-manual Campaigns cannot be deleted due to foreign-key constraint #6659
- [ ] Implement `retryCampaign` that retries the subprocesses of `createCampaign`
- [ ] Make `ChangesetJob` execution idempotent: check that new commits are not added to same branch, check for `ErrAlreadyExists` response from code hosts
- [ ] Implement `cancelCampaignPlan` so that all jobs are cancelled
- [ ] Create changesets for a repository's default branch (right now we open the PR for `master`, [see here](https://github.com/sourcegraph/sourcegraph/blob/5ade8c1edc52688387673eebe0fcd2db033720bc/enterprise/pkg/a8n/service.go#L210))
- [ ] More efficient and stable `Changeset` syncing
- [ ] Bitbucket Server webhooks ([RFC](https://docs.google.com/document/d/18RStJNmD9BswkjDwDVDDe792j2UavgqHB_hHE9i3npc/edit))
- [ ] Syncing 24 GitHub pull requests fails due to GraphQL node limit reached #6658
- [ ] Gracefully handle changeset deletion in code hosts #6396
- [ ] Heuristic syncing of Changesets and ChangesetEvents #6388
- [ ] Only ""update"" a changeset when it actually changed.
### Frontend
- [ ] Add snapshot tests
- [ ] Cancel the previous preview with `switchMap`
- [ ] ...
### Frontend & Backend
- [ ] Show the combined event timeline from all changesets
- [ ] GraphQL schema
- [ ] Show comments
- [ ] Show reviews
- [ ] Shows status of all changesets in the campaign and allow querying/filtering by the following fields
- [ ] Show ""commit statuses""
- [ ] Show ""labels""
- [ ] Filtering fields via GraphQL API
- [ ] Filter by ""open/merged/closed""
- [ ] Filter by ""commit statuses""
- [ ] Filter by ""review status""
- [ ] Filter by ""labels""
- [ ] Show the set of participants involved in the campaign
- [ ] GraphQL schema
- [ ] Rename `campaign.{name,description}` to `campaign.{title,body}`
### Internal User Testing
- [ ] Test the user flow with a colleague at Sourcegraph
",1.0,"DRAFT!/WIP! Automation Tracking Issue 3.11 - ### Goals
* Improve stability & performance
* Improve developer speed & UX by refactoring and paying off tech debt
* Catch up on features
### Backend
- [ ] For some repositories `gitserver` cannot apply the diff #6625
- [ ] Continuously refactor existing code, pay off tech debt and continuously improve developer UX: #6572
- [ ] Improve performance, stability and observability when executing `CampaignPlans` and `createCampaign`
- [ ] Set an upper time limit on `CampaignJob` execution
- [ ] Add metrics/tracing to `previewCampaignPlan` and `createCampaign`
- [ ] Use a persistent queue instead of goroutines (see #6572 ""No persistent queue"")
- [ ] Execute `ChangesetJob`s in parallel (see #6572 ""GitHub rate limit and abuse detection"")
- [ ] Correctly handle deletions
- [ ] Define and implement what happens when a {repository,external-service} gets deleted
- [ ] Non-manual Campaigns cannot be deleted due to foreign-key constraint #6659
- [ ] Implement `retryCampaign` that retries the subprocesses of `createCampaign`
- [ ] Make `ChangesetJob` execution idempotent: check that new commits are not added to same branch, check for `ErrAlreadyExists` response from code hosts
- [ ] Implement `cancelCampaignPlan` so that all jobs are cancelled
- [ ] Create changesets for a repository's default branch (right now we open the PR for `master`, [see here](https://github.com/sourcegraph/sourcegraph/blob/5ade8c1edc52688387673eebe0fcd2db033720bc/enterprise/pkg/a8n/service.go#L210))
- [ ] More efficient and stable `Changeset` syncing
- [ ] Bitbucket Server webhooks ([RFC](https://docs.google.com/document/d/18RStJNmD9BswkjDwDVDDe792j2UavgqHB_hHE9i3npc/edit))
- [ ] Syncing 24 GitHub pull requests fails due to GraphQL node limit reached #6658
- [ ] Gracefully handle changeset deletion in code hosts #6396
- [ ] Heuristic syncing of Changesets and ChangesetEvents #6388
- [ ] Only ""update"" a changeset when it actually changed.
### Frontend
- [ ] Add snapshot tests
- [ ] Cancel the previous preview with `switchMap`
- [ ] ...
### Frontend & Backend
- [ ] Show the combined event timeline from all changesets
- [ ] GraphQL schema
- [ ] Show comments
- [ ] Show reviews
- [ ] Shows status of all changesets in the campaign and allow querying/filtering by the following fields
- [ ] Show ""commit statuses""
- [ ] Show ""labels""
- [ ] Filtering fields via GraphQL API
- [ ] Filter by ""open/merged/closed""
- [ ] Filter by ""commit statuses""
- [ ] Filter by ""review status""
- [ ] Filter by ""labels""
- [ ] Show the set of participants involved in the campaign
- [ ] GraphQL schema
- [ ] Rename `campaign.{name,description}` to `campaign.{title,body}`
### Internal User Testing
- [ ] Test the user flow with a colleague at Sourcegraph
",1,draft wip automation tracking issue goals improve stability performance improve developer speed ux by refactoring and paying off tech debt catch up on features backend for some repositories gitserver cannot apply the diff continuously refactor existing code pay off tech debt and continuously improve developer ux improve performance stability and observability when executing campaignplans and createcampaign set an upper time limit on campaignjob execution add metrics tracing to previewcampaignplan and createcampaign use a persistent queue instead of goroutines see no persistent queue execute changesetjob s in parallel see github rate limit and abuse detection correctly handle deletions define and implement what happens when a repository external service gets deleted non manual campaigns cannot be deleted due to foreign key constraint implement retrycampaign that retries the subprocesses of createcampaign make changesetjob execution idempotent check that new commits are not added to same branch check for erralreadyexists response from code hosts implement cancelcampaignplan so that all jobs are cancelled create changesets for a repositories default branch right now we open the pr for master more efficient and stable changeset syncing bitbucket server webhooks syncing github pull requests fails due to graphql node limit reached gracefully handle changeset deletion in code hosts heuristic syncing of changesets and changesetevents only update a changeset when it actually changed frontend add snapshot tests cancel the previous preview with switchmap frontend backend show the combined event timeline from all changesets graphql schema show comments show reviews shows status of all changesets in the campaign and allow querying filtering by the following fields show commit statuses show labels filtering fields via graphql api filter by open merged closed filter by commit statuses filter by review status filter by labels show the set of participants involved in the campaign graphql schema rename campaign name description to campaign title body internal user testing test the user flow with a colleague at sourcegraph ,1
105786,13217755994.0,IssuesEvent,2020-08-17 07:26:38,shopsys/shopsys,https://api.github.com/repos/shopsys/shopsys,closed,Admin header wrapping without buttons,Design & Apperance,"
### What is happening
The text is wrapping in admin header.

### Expected result
I believe that we should not have the empty div for buttons (if it is empty)

maybe that would help a bit.

",1.0,"Admin header wrapping without buttons -
### What is happening
The text is wrapping in admin header.

### Expected result
I believe that we should not have the empty div for buttons (if it is empty)

maybe that would help a bit.

",0,admin header wrapping without buttons what is happening the text is wrapping in admin header expected result i believe that we should not have the empty div for buttons if it is empty maybe that would help a bit ,0
748585,26128726632.0,IssuesEvent,2022-12-28 23:25:05,Ore-Design/Ore-3D-Reports-Changelog,https://api.github.com/repos/Ore-Design/Ore-3D-Reports-Changelog,closed,Bug: Capsule/You Products using Incorrect BOM Components [1.5.6],bug in progress medium priority,"Example:
Anything with Bowl
ex.
A1206 - too much sheet material, no bowl
S1206 - too much sheet material, no bowl
A1235 - too much sheet material, no bowl
A1237 - too much sheet material, no bowl
A1209 - too much sheet material, no bowl",1.0,"Bug: Capsule/You Products using Incorrect BOM Components [1.5.6] - Example:
Anything with Bowl
ex.
A1206 - too much sheet material, no bowl
S1206 - too much sheet material, no bowl
A1235 - too much sheet material, no bowl
A1237 - too much sheet material, no bowl
A1209 - too much sheet material, no bowl",0,bug capsule you products using incorrect bom components example anything with bowl ex too much sheet material no bowl too much sheet material no bowl too much sheet material no bowl too much sheet material no bowl too much sheet material no bowl,0
3002,12966077422.0,IssuesEvent,2020-07-20 23:52:15,longhorn/longhorn,https://api.github.com/repos/longhorn/longhorn,closed,[BUG] Backup List issue when failing to retrieve backup names for backup volume,bug priority/2 require/automation-e2e require/automation-engine,"**Describe the bug**
When trying to List all Backup Volumes, if there is an error retrieving the backup names via `getBackupNamesForVolume` for a volume, we currently return an error back to the caller instead of setting the error as part of the VolumeInfo object. This blocks the UI/API from showing all available backup volumes.
**To Reproduce**
This is just one of many possible failure cases, but this one is easy to reproduce
Setup:
- create vol `bak1`, `bak2`, `bak3` and attach to nodes
- write some data to all volumes
- take a backup of each volume
Repro:
- create a file named: `backup_1234@failure.cfg` inside of the backups folder for volume `bak2`
- now list backup volumes will fail and you should no longer see backup volumes
- you should still be able to see backups for `bak1`, `bak3` if manually requested via the api
**Expected behavior**
Show available backup volumes and backups even if a single backup volume has issues.
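As a rough illustration of that expected behavior, here is a Python sketch of collecting per-volume errors instead of failing the whole listing; Longhorn itself is written in Go, and the names below are assumptions for illustration only.
```python
# Sketch only: keep listing backup volumes even when fetching the backup names
# for one of them fails, and record the error on that volume instead of aborting.
def list_backup_volumes(volume_names, get_backup_names_for_volume):
    volumes = []
    for name in volume_names:
        info = {'name': name, 'backups': [], 'error': None}
        try:
            info['backups'] = get_backup_names_for_volume(name)
        except Exception as exc:
            # e.g. a malformed backup config file for this volume
            info['error'] = str(exc)
        volumes.append(info)
    return volumes
```
With this shape, a single bad volume (like the crafted `backup_1234@failure.cfg` in the repro) only marks that volume as errored, while `bak1` and `bak3` stay visible.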
",2.0,"[BUG] Backup List issue when failing to retrieve backup names for backup volume - **Describe the bug**
When trying to List all Backup Volumes, if there is an error retrieving the backup names via `getBackupNamesForVolume` for a volume, we currently return an error back to the caller instead of setting the error as part of the VolumeInfo object. This blocks the UI/API from showing all available backup volumes.
**To Reproduce**
This is just one of many possible failure cases, but this one is easy to reproduce
Setup:
- create vol `bak1`, `bak2`, `bak3` and attach to nodes
- write some data to all volumes
- take a backup of each volume
Repro:
- create a file named: `backup_1234@failure.cfg` inside of the backups folder for volume `bak2`
- now list backup volumes will fail and you should no longer see backup volumes
- you should still be able to see backups for `bak1`, `bak3` if manually requested via the api
**Expected behavior**
Show available backup volumes and backups even if a single backup volume has issues.
",1, backup list issue when failing to retrieve backup names for backup volume describe the bug when trying to list all backup volumes and there is an error to retrieve the backupnames via getbackupnamesforvolume for a volume we currently return an error back to the caller instead of setting the error as part of the volumeinfo object this blocks the ui api from showing all available backup volumes to reproduce this is just one of many possible failure cases but this one is easy to reproduce setup create vol and attach to nodes write some data to all volumes take a backup of each volume repro create a file named backup failure cfg inside of the backups folder for volume now list backup volumes will fail and you should no longer see backup volumes you should still be able to see backups for if manually requested via the api expected behavior show available backup volumes and backups even if a single backup volume has issues ,1
3302,2610060252.0,IssuesEvent,2015-02-26 18:17:45,chrsmith/jsjsj122,https://api.github.com/repos/chrsmith/jsjsj122,opened,路桥治疗不育哪里效果最好,auto-migrated Priority-Medium Type-Defect,"```
Luqiao: where is infertility treated with the best results? [Taizhou Wuzhou Reproductive Hospital] 24-hour health
consultation hotline: 0576-88066933 (QQ 800080609) (WeChat tzwzszyy). Hospital address: No. 229 Fengnan Road,
Jiaojiang District, Taizhou (next to the Fengnan roundabout). Directions: take bus 104, 108, 118 or 198, or the
Jiaojiang-Jinqing bus, directly to the Fengnan residential area; or take bus 107, 105, 109, 112, 901 or 902 to
Xingxing Square and walk to the hospital.
Treatment items: impotence, premature ejaculation, prostatitis, prostatic hyperplasia, balanitis, ...,
azoospermia, phimosis, varicocele, gonorrhea, etc.
Taizhou Wuzhou Reproductive Hospital is the largest men's health hospital in Taizhou, with authoritative experts
available for free online consultation, complete professional examination and treatment equipment, and fees
charged strictly according to national standards. Cutting-edge medical equipment, in step with the world.
Authoritative experts, a model of professionalism. Humanized service, with everything centered on the patient.
For men's health, choose Taizhou Wuzhou Reproductive Hospital: professional men's care, for men.
```
-----
Original issue reported on code.google.com by `poweragr...@gmail.com` on 30 May 2014 at 7:08",1.0,"路桥治疗不育哪里效果最好 - ```
Luqiao: where is infertility treated with the best results? [Taizhou Wuzhou Reproductive Hospital] 24-hour health
consultation hotline: 0576-88066933 (QQ 800080609) (WeChat tzwzszyy). Hospital address: No. 229 Fengnan Road,
Jiaojiang District, Taizhou (next to the Fengnan roundabout). Directions: take bus 104, 108, 118 or 198, or the
Jiaojiang-Jinqing bus, directly to the Fengnan residential area; or take bus 107, 105, 109, 112, 901 or 902 to
Xingxing Square and walk to the hospital.
Treatment items: impotence, premature ejaculation, prostatitis, prostatic hyperplasia, balanitis, ...,
azoospermia, phimosis, varicocele, gonorrhea, etc.
Taizhou Wuzhou Reproductive Hospital is the largest men's health hospital in Taizhou, with authoritative experts
available for free online consultation, complete professional examination and treatment equipment, and fees
charged strictly according to national standards. Cutting-edge medical equipment, in step with the world.
Authoritative experts, a model of professionalism. Humanized service, with everything centered on the patient.
For men's health, choose Taizhou Wuzhou Reproductive Hospital: professional men's care, for men.
```
-----
Original issue reported on code.google.com by `poweragr...@gmail.com` on 30 May 2014 at 7:08",0,路桥治疗不育哪里效果最好 路桥治疗不育哪里效果最好【台州五洲生殖医院】 咨询热线 微信号tzwzszyy 医院地址 台 (枫南大转盘旁)乘车线路 、 、 、 , 、 、 、 、 、 ,步行即可到院。 诊疗项目:阳痿,早泄,前列腺炎,前列腺增生,龟头炎,�� �精,无精。包皮包茎,精索静脉曲张,淋病等。 台州五洲生殖医院是台州最大的男科医院,权威专家在线免�� �咨询,拥有专业完善的男科检查治疗设备,严格按照国家标� ��收费。尖端医疗设备,与世界同步。权威专家,成就专业典 范。人性化服务,一切以患者为中心。 看男科就选台州五洲生殖医院,专业男科为男人。 original issue reported on code google com by poweragr gmail com on may at ,0
23536,4955656969.0,IssuesEvent,2016-12-01 21:04:00,easy-updates-manager/easy-updates-manager,https://api.github.com/repos/easy-updates-manager/easy-updates-manager,closed,Easy Updates Manager doesn't log updates done through Jetpack Manage,documentation,"When using the Jetpack Manage part of Jetpack, you have the ability to update plugins remotely through wordpress.com. If you use this, it doesn't log the updates through Easy Updates Manager.
Perhaps a FAQ should be created to let users know that Easy Updates Manager doesn't log updates that are done through external software.
",1.0,"Easy Updates Manager doesn't log updates done through Jetpack Manage - When using the Jetpack Manage part of Jetpack, you have the ability to update plugins remotely through wordpress.com. If you use this, it doesn't log the updates through Easy Updates Manager.
Perhaps a FAQ should be created to let users know that Easy Updates Manager doesn't log updates that are done through external software.
",0,easy updates manager doesn t log updates done through jetpack manage when using the jetpack manage part of jetpack you have the ability to update plugins remotely through wordpress com if you use this it doesn t log the updates through easy updates manager perhaps a faq should be created to let users know that easy updates manager doesn t log updates that are done through external software ,0
267783,23319576194.0,IssuesEvent,2022-08-08 15:12:57,splendo/kaluga,https://api.github.com/repos/splendo/kaluga,opened,Delayed verification for mock methods,component:test-utils,"Sometimes you want to verify a method is called a number of times within a certain period (e.g. at least twice within a second, `verifyWithin(duration = 1.seconds, times = 2)`) or over a certain time (e.g. between 2 and 5 times over a second, `verifyOver(duration = 1.seconds, times = 2...5)`)
This also covers the specific case of #541",1.0,"Delayed verification for mock methods - Sometimes you want to verify a method is called a number of times within a certain period (e.g. at least twice within a second, `verifyWithin(duration = 1.seconds, times = 2)`) or over a certain time (e.g. between 2 and 5 times over a second, `verifyOver(duration = 1.seconds, times = 2...5)`)
This also covers the specific case of #541",0,delayed verification for mock methods sometimes you want to verify a method is called a number of times within a certain period e g at least twice within a second verifywithin duration seconds times or over a certain time e g between and times over a second verfiyover duration seconds times this also covers the specific case of ,0
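To pin down the semantics proposed in the Kaluga issue above, here is a rough Python sketch of what `verifyWithin` and `verifyOver` could mean; Kaluga itself is Kotlin and its mock API differs, so the polling approach, names and parameters below are assumptions.
```python
import time

def verify_within(get_call_count, times, duration_s, poll_s=0.05):
    # Succeed as soon as the mock has been called at least `times` times within `duration_s`.
    deadline = time.monotonic() + duration_s
    while time.monotonic() < deadline:
        if get_call_count() >= times:
            return
        time.sleep(poll_s)
    raise AssertionError(f'expected at least {times} calls within {duration_s}s, got {get_call_count()}')

def verify_over(get_call_count, times_range, duration_s):
    # Wait out the full duration, then check the total call count falls in `times_range`.
    time.sleep(duration_s)
    count = get_call_count()
    if count not in times_range:
        raise AssertionError(f'expected a call count in {times_range}, got {count}')
```
For example, passing a callable that returns the mock's current call count, `verify_within(..., times=2, duration_s=1.0)` models 'at least twice within a second', while `verify_over(..., range(2, 6), 1.0)` models 'between 2 and 5 times over a second'.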
4190,15770441328.0,IssuesEvent,2021-03-31 19:26:17,jessicamorris/jessicamorris.github.io,https://api.github.com/repos/jessicamorris/jessicamorris.github.io,closed,Automate gh-pages update on commits to main branch,automation,"GitHub's got support for repo automation, surely I can make changes auto-deploy.
More on GitHub actions: https://docs.github.com/en/actions
Acceptance criteria:
- The page at jessicamorris.github.io/ automatically updates when the `main` branch changes.",1.0,"Automate gh-pages update on commits to main branch - GitHub's got support for repo automation, surely I can make changes auto-deploy.
More on GitHub actions: https://docs.github.com/en/actions
Acceptance criteria:
- The page at jessicamorris.github.io/ automatically updates when the `main` branch changes.",1,automate gh pages update on commits to main branch github s got support for repo automation surely i can make changes auto deploy more on github actions acceptance criteria the page at jessicamorris github io automatically updates when the main branch changes ,1
8070,26149172922.0,IssuesEvent,2022-12-30 10:46:23,elastic/e2e-testing,https://api.github.com/repos/elastic/e2e-testing,closed,Download stack logs from the AWS instance to the Jenkins worker,enhancement Team:Automation size:S triaged area:ci,"It will allow troubleshooting the Stack deployment better, as now it's needed to SSH into the machine, and monitor the logs",1.0,"Download stack logs from the AWS instance to the Jenkins worker - It will allow troubleshooting the Stack deployment better, as now it's needed to SSH into the machine, and monitor the logs",1,download stack logs from the aws instance to the jenkins worker it will allow troubleshooting the stack deployment better as now it s needed to ssh into the machine and monitor the logs,1
108,3779429684.0,IssuesEvent,2016-03-18 08:15:17,eclipse/smarthome,https://api.github.com/repos/eclipse/smarthome,opened,Propolsals for changes in Automation engine API,Automation,"1) Type of Input/output defaultValue property is better to be changed from Object to String. In this way the default value will be presented in same way as default value of ConfigDescriptionParameter. Also the rule engine does not know what kind of object to create. Conversion from string to object has to be served by the handler because it knows how to handle this value.
2) At the moment, types of inputs and outputs are defined as fully qualified names. I'm not sure this is usable for people who do not know Java (i.e. a JavaScript developer defining rules through the JSON definition). The proposal is for the type to be just a string and for validation to be based on equality of input and output as strings.
3) Configuration (which contains values for configuration properties) of Module, Rule, RuleTemplate at the moment is defined as Map. Our proposal is for the configuration values to be presented as Map and stored as String. In that way the type of the value will be defined and it will be presented in the same way as the default value of a configuration property. Also, the configuration values will be easily serialized/deserialized.
",1.0,"Propolsals for changes in Automation engine API - 1) Type of Input/output defaultValue property is better to be changed from Object to String. In this way the default value will be presented in same way as default value of ConfigDescriptionParameter. Also the rule engine does not know what kind of object to create. Conversion from string to object has to be served by the handler because it knows how to handle this value.
2) At the moment, types of inputs and outputs are defined as fully qualified names. I'm not sure this is usable for people who do not know Java (i.e. a JavaScript developer defining rules through the JSON definition). The proposal is for the type to be just a string and for validation to be based on equality of input and output as strings.
3) Configuration (which contains values for configuration properties) of Module, Rule, RuleTemplate at the moment is defined as Map. Our proposal is for the configuration values to be presented as Map and stored as String. In that way the type of the value will be defined and it will be presented in the same way as the default value of a configuration property. Also, the configuration values will be easily serialized/deserialized.
",1,propolsals for changes in automation engine api type of input output defaultvalue property is better to be changed from object to string in this way the default value will be presented in same way as default value of configdescriptionparameter also the rule engine does not know what kind of object to create conversion from string to object has to be served by the handler because it knows how to handle this value at the moment types of inputs and outputs are defined as fully qualified names i’m not sure if it usable for the people which does not know java i e javascript developer defining rules through the json definition the proposal is the type to be just a string and validation to be based on equality of input and output as strings configuration which contains values for configuration properties of module rule ruletemplate at the moment is defined as map our proposal is the configuration values to be presented as map and stored as string in that way the type of the will be defined and it will be presented in the same way as default value of configuration property also the configuration values will be easily serialized deserialized ,1
802487,28964130953.0,IssuesEvent,2023-05-10 06:32:30,alkem-io/client-web,https://api.github.com/repos/alkem-io/client-web,reopened,BUG: Banners on Space cards incorrect on pages,bug client User High Priority,"**Describe the bug**
The banners on the cards for the Spaces are showing different dimensions for the banners on the various pages (search, home, profile).
**To Reproduce**
Steps to reproduce the behavior:
1. Search for the Publieke Dienstverlening Space on the search page
2. See the card with incorrectly cropped banner
3. Go to User profile page of Jet Klaver
4. See correct banner on Publieke Dienstverlening Space card
**Expected behavior**
Cards for Spaces on the Home page and Search page must use Card banner instead of Page banner (I think this solves it?)
**Screenshots**
",1.0,"BUG: Banners on Space cards incorrect on pages - **Describe the bug**
The banners on the cards for the Spaces are showing different dimensions for the banners on the various pages (search, home, profile).
**To Reproduce**
Steps to reproduce the behavior:
1. Search for the Publieke Dienstverlening Space on the search page
2. See the card with incorrectly cropped banner
3. Go to User profile page of Jet Klaver
4. See correct banner on Publieke Dienstverlening Space card
**Expected behavior**
Cards for Spaces on the Home page and Search page must use Card banner instead of Page banner (I think this solves it?)
**Screenshots**
",0,bug banners on space cards incorrect on pages describe the bug the banners on the cards for the spaces are showing different dimensions for the banners on the various pages search home profile to reproduce steps to reproduce the behavior search for the publieke dienstverlening space on the search page see the card with incorrectly cropped banner go to user profile page of jet klaver see correct banner on publieke dienstverlening space card expected behavior cards for spaces on the home page and search page must use card banner instead of page banner i think this solves it screenshots ,0
6646,3038729620.0,IssuesEvent,2015-08-07 01:05:59,atom/atom,https://api.github.com/repos/atom/atom,closed,What version of Jasmine is Atom using?,documentation,"In [vendor/jasmine.js](https://github.com/atom/atom/blob/52abb4afc9098454cea8e220a363be3a9b958934/vendor/jasmine.js#L2662), I found that the Jasmine version is 1.3. In [docs/writing-specs.md](https://raw.githubusercontent.com/atom/atom/2f62346c585361591e6a9de7349401c7ebe360eb/docs/writing-specs.md), I found links to both Jasmine 1.3 and 2.0. It would be nice to document the version of Jasmine used in Atom and correct the wrong link(s) in `writing-specs.md`.",1.0,"What version of Jasmine is Atom using? - In [vendor/jasmine.js](https://github.com/atom/atom/blob/52abb4afc9098454cea8e220a363be3a9b958934/vendor/jasmine.js#L2662), I found that the Jasmine version is 1.3. In [docs/writing-specs.md](https://raw.githubusercontent.com/atom/atom/2f62346c585361591e6a9de7349401c7ebe360eb/docs/writing-specs.md), I found links to both Jasmine 1.3 and 2.0. It would be nice to document the version of Jasmine used in Atom and correct the wrong link(s) in `writing-specs.md`.",0,what version of jasmine is atom using in i found that the jasmine version is in i found links to both jasmine and it would be nice to document the version of jasmine used in atom and correct the wrong link s in writing specs md ,0
169840,20841949756.0,IssuesEvent,2022-03-21 01:55:58,turkdevops/graphql-tools,https://api.github.com/repos/turkdevops/graphql-tools,opened,"CVE-2022-24771 (High) detected in forge0.10.0, zimphonyzimbra-domain-admin-1.2",security vulnerability,"## CVE-2022-24771 - High Severity Vulnerability
Vulnerable Libraries - forge0.10.0, zimphonyzimbra-domain-admin-1.2
Vulnerability Details
Forge (also called `node-forge`) is a native implementation of Transport Layer Security in JavaScript. Prior to version 1.3.0, RSA PKCS#1 v1.5 signature verification code is lenient in checking the digest algorithm structure. This can allow a crafted structure that steals padding bytes and uses unchecked portion of the PKCS#1 encoded message to forge a signature when a low public exponent is being used. The issue has been addressed in `node-forge` version 1.3.0. There are currently no known workarounds.
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2022-24771 (High) detected in forge0.10.0, zimphonyzimbra-domain-admin-1.2 - ## CVE-2022-24771 - High Severity Vulnerability
Vulnerable Libraries - forge0.10.0, zimphonyzimbra-domain-admin-1.2
Vulnerability Details
Forge (also called `node-forge`) is a native implementation of Transport Layer Security in JavaScript. Prior to version 1.3.0, RSA PKCS#1 v1.5 signature verification code is lenient in checking the digest algorithm structure. This can allow a crafted structure that steals padding bytes and uses unchecked portion of the PKCS#1 encoded message to forge a signature when a low public exponent is being used. The issue has been addressed in `node-forge` version 1.3.0. There are currently no known workarounds.
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve high detected in zimphonyzimbra domain admin cve high severity vulnerability vulnerable libraries zimphonyzimbra domain admin vulnerability details forge also called node forge is a native implementation of transport layer security in javascript prior to version rsa pkcs signature verification code is lenient in checking the digest algorithm structure this can allow a crafted structure that steals padding bytes and uses unchecked portion of the pkcs encoded message to forge a signature when a low public exponent is being used the issue has been addressed in node forge version there are currently no known workarounds publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution node forge step up your open source security game with whitesource ,0
152782,19696981458.0,IssuesEvent,2022-01-12 13:11:40,jtimberlake/serverless-artillery,https://api.github.com/repos/jtimberlake/serverless-artillery,closed,WS-2019-0318 (High) detected in handlebars-4.1.0.tgz - autoclosed,security vulnerability,"## WS-2019-0318 - High Severity Vulnerability
Vulnerable Library - handlebars-4.1.0.tgz
Handlebars provides the power necessary to let you build semantic templates effectively with no frustration
In ""showdownjs/showdown"", versions prior to v4.4.5 are vulnerable against Regular expression Denial of Service (ReDOS) once receiving specially-crafted templates.
In ""showdownjs/showdown"", versions prior to v4.4.5 are vulnerable against Regular expression Denial of Service (ReDOS) once receiving specially-crafted templates.
",0,ws high detected in handlebars tgz autoclosed ws high severity vulnerability vulnerable library handlebars tgz handlebars provides the power necessary to let you build semantic templates effectively with no frustration library home page a href path to dependency file serverless artillery package json path to vulnerable library serverless artillery node modules nyc node modules handlebars package json dependency hierarchy nyc tgz root library istanbul reports tgz x handlebars tgz vulnerable library found in head commit a href found in base branch master vulnerability details in showdownjs showdown versions prior to are vulnerable against regular expression denial of service redos once receiving specially crafted templates publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution handlebars isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree nyc istanbul reports handlebars isminimumfixversionavailable true minimumfixversion handlebars basebranches vulnerabilityidentifier ws vulnerabilitydetails in showdownjs showdown versions prior to are vulnerable against regular expression denial of service redos once receiving specially crafted templates vulnerabilityurl ,0
3537,13924603387.0,IssuesEvent,2020-10-21 15:45:31,elastic/apm-integration-testing,https://api.github.com/repos/elastic/apm-integration-testing,closed,--no-kibana does not remove APM server kibana flags,automation bug team:automation,"Running the following command
```
python3 scripts/compose.py start 8.0.0 --no-kibana
```
This is the command line for the APM Server; it contains Kibana settings and should not
```
/usr/local/bin/docker-entrypoint apm-server ... -E apm-server.kibana.enabled=true -E apm-server.kibana.host=kibana:5601 -E apm-server.kibana.username=apm_server_user -E apm-server.kibana.password=changeme ...
```",2.0,"--no-kibana does not remove APM server kibana flags - Running the following command
```
python3 scripts/compose.py start 8.0.0 --no-kibana
```
This is the command line for the APM Server; it contains Kibana settings and should not
```
/usr/local/bin/docker-entrypoint apm-server ... -E apm-server.kibana.enabled=true -E apm-server.kibana.host=kibana:5601 -E apm-server.kibana.username=apm_server_user -E apm-server.kibana.password=changeme ...
```",1, no kibana does not remove apm server kibana flags running the following command scripts compose py start no kibana this is the command line for the apm sever it contains kibana settings and should not usr local bin docker entrypoint apm server e apm server kibana enabled true e apm server kibana host kibana e apm server kibana username apm server user e apm server kibana password changeme ,1
282012,21315455539.0,IssuesEvent,2022-04-16 07:31:26,kaiyichen/pe,https://api.github.com/repos/kaiyichen/pe,opened,Wrong format of class diagram for add command in developer guide,type.DocumentationBug severity.VeryLow,"
abstract classes should EITHER include `{abstract}` or be italic. Should not be both
",1.0,"Wrong format of class diagram for add command in developer guide - 
abstract classes should EITHER include `{abstract}` or be italic. Should not be both
",0,wrong format of class diagram for add command in developer guide abstract classes should either include abstract or be italic should not be both ,0
20366,6035018330.0,IssuesEvent,2017-06-09 12:52:10,EEA-Norway-Grants/dataviz,https://api.github.com/repos/EEA-Norway-Grants/dataviz,opened,"give up tabs logic from components, use a tabs ""widget"" in sidebar",Type: Code quality,"The sidebar shouldn't have any tab-related logic, and neither should the sub-components. We should end up with something like this in the main template:
```html
```
Before we start writing our own tab component, evaluate this project, it looks very much ok: https://github.com/spatie/vue-tabs-component
(It's also debatable if the sidebar has any business being a separate component, but we'll see about that later.)
",1.0,"give up tabs logic from components, use a tabs ""widget"" in sidebar - The sidebar shouldn't have any tab-related logic, and neither should the sub-components. We should end up with something like this in the main template:
```html
```
Before we start writing our own tab component, evaluate this project, it looks very much ok: https://github.com/spatie/vue-tabs-component
(It's also debatable if the sidebar has any business being a separate component, but we'll see about that later.)
",0,give up tabs logic from components use a tabs widget in sidebar the sidebar shouldn t have any tab related logic and neither should the sub components we should end up with something like this in the main template html before we start writing our own tab component evaluate this project it looks very much ok it s also debatable if the sidebar has any business being a separate component but we ll see about that later ,0
1662,10550431956.0,IssuesEvent,2019-10-03 11:02:43,DevExpress/testcafe,https://api.github.com/repos/DevExpress/testcafe,opened,Can't scroll to the drag target element in iframe in IE11,AREA: client FREQUENCY: level 1 HAS WORKAROUND SYSTEM: automations TYPE: bug,"
### What is your Test Scenario?
Drag an element located in an iframe, that is not visible in the viewport.
### What is the Current behavior?
TestCafe fails with the `Element doesn't exist` error.
Workaround: add a step that hovers the iframe's `body`.
```js
await t.switchToIframe('iframe');
await t.hover('body')
```
### What is the Expected behavior?
TestCafe should scroll iframe's parent and be able to drag the target element.
### What is your web application and your TestCafe test code?
Your website URL (or attach your complete example): https://demos.devexpress.com/Bootstrap/GridView/Adaptivity.aspx
Your complete test code (or attach your test files):
```js
fixture`test`.page`https://demos.devexpress.com/Bootstrap/GridView/Adaptivity.aspx`;
test('test', async t => {
await t.resizeWindow(1420, 760);
await t.switchToIframe('#content > div:nth-child(11) > div.demo-device-container > div.demo-device.bg-secondary.border.border-secondary.qrcode-container > div > iframe');
// NOTE: uncomment the line below to fix the test
// await t.hover('body')
await t.drag('#ctl05_GridViewAdaptiveLayout_col1', 200, 0);
});
```
Your complete configuration file (if any):
```
```
Your complete test report:
```
```
Screenshots:
```
```
### Steps to Reproduce:
1. Go to my website ...
3. Execute this command...
4. See the error...
### Your Environment details:
* testcafe version: 1.5.0
* node.js version: 10.15.0
* command-line arguments: testcafe ie test.js
* browser name and version: IE 11
* platform and version:
* other:
",1.0,"Can't scroll to the drag target element in iframe in IE11 -
### What is your Test Scenario?
Drag an element located in an iframe, that is not visible in the viewport.
### What is the Current behavior?
TestCafe fails with the `Element doesn't exist` error.
Workaround: add a step that hovers the iframe's `body`.
```js
await t.switchToIframe('iframe');
await t.hover('body')
```
### What is the Expected behavior?
TestCafe should scroll iframe's parent and be able to drag the target element.
### What is your web application and your TestCafe test code?
Your website URL (or attach your complete example): https://demos.devexpress.com/Bootstrap/GridView/Adaptivity.aspx
Your complete test code (or attach your test files):
```js
fixture`test`.page`https://demos.devexpress.com/Bootstrap/GridView/Adaptivity.aspx`;
test('test', async t => {
await t.resizeWindow(1420, 760);
await t.switchToIframe('#content > div:nth-child(11) > div.demo-device-container > div.demo-device.bg-secondary.border.border-secondary.qrcode-container > div > iframe');
// NOTE: uncomment the line below to fix the test
// await t.hover('body')
await t.drag('#ctl05_GridViewAdaptiveLayout_col1', 200, 0);
});
```
Your complete configuration file (if any):
```
```
Your complete test report:
```
```
Screenshots:
```
```
### Steps to Reproduce:
1. Go to my website ...
3. Execute this command...
4. See the error...
### Your Environment details:
* testcafe version: 1.5.0
* node.js version: 10.15.0
* command-line arguments: testcafe ie test.js
* browser name and version: IE 11
* platform and version:
* other:
",1,can t scroll to the drag target element in iframe in if you have all reproduction steps with a complete sample app please share as many details as possible in the sections below make sure that you tried using the latest testcafe version where this behavior might have been already addressed before submitting an issue please check contributing md and existing issues in this repository in case a similar issue exists or was already addressed this may save your time and ours what is your test scenario drag an element located in an iframe that is not visible in the viewport what is the current behavior testcafe fails with the element doesn t exist error workaround add a step that hovers iframe s js await t switchtoiframe iframe await t hover body what is the expected behavior testcafe should scroll iframe s parent and be able to drag the target element what is your web application and your testcafe test code your website url or attach your complete example your complete test code or attach your test files js fixture test page test test async t await t resizewindow await t switchtoiframe content div nth child div demo device container div demo device bg secondary border border secondary qrcode container div iframe note uncomment the line below to fix the test await t hover body await t drag gridviewadaptivelayout your complete configuration file if any your complete test report screenshots steps to reproduce go to my website execute this command see the error your environment details testcafe version node js version command line arguments testcafe ie test js browser name and version ie platform and version other ,1
2812,12626373627.0,IssuesEvent,2020-06-14 16:17:02,pysal/submodule_template,https://api.github.com/repos/pysal/submodule_template,opened,Automated merging for conda-forge feedstock,automation,New functionality in `conda-forge` allows for the automated merging of passing PRs in a package's feedstock. It is enabled through the opening of an issue with a specific copy/pasted message. See these issues in the [`mapclassify`](https://github.com/conda-forge/mapclassify-feedstock/issues/11) and [`spaghetti`](https://github.com/conda-forge/spaghetti-feedstock/issues/21#issuecomment-643786627) feedstocks for examples.,1.0,Automated merging for conda-forge feedstock - New functionality in `conda-forge` allows for the automated merging of passing PRs in a package's feedstock. It is enabled through the opening of an issue with a specific copy/pasted message. See these issues in the [`mapclassify`](https://github.com/conda-forge/mapclassify-feedstock/issues/11) and [`spaghetti`](https://github.com/conda-forge/spaghetti-feedstock/issues/21#issuecomment-643786627) feedstocks for examples.,1,automated merging for conda forge feedstock new functionality in conda forge allows for the automated merging of passing prs in a package s feedstock it is enabled through the opening of an issue with a specific copy pasted message see these issues in the and feedstocks for examples ,1
6738,23816331511.0,IssuesEvent,2022-09-05 07:08:53,pingcap/tiflow,https://api.github.com/repos/pingcap/tiflow,closed,cdc cli changefeed remove: Error: [CDC:ErrPDEtcdAPIError]etcd api call error: context deadline exceeded,type/bug severity/minor found/automation area/ticdc,"### What did you do?
- Create 300 changefeed with kafka sink
- Remove all changefeeds one by one
### What did you expect to see?
_No response_
### What did you see instead?
- When removing changefeed 6, cli failed: Error: [CDC:ErrPDEtcdAPIError]etcd api call error: context deadline exceeded
### Versions of the cluster
Upstream TiDB cluster version (execute `SELECT tidb_version();` in a MySQL client):
```console
(paste TiDB cluster version here)
```
Upstream TiKV version (execute `tikv-server --version`):
```console
(paste TiKV version here)
v5.4.2
```
TiCDC version (execute `cdc version`):
```console
(paste TiCDC version here)
v5.4.2
```",1.0,"cdc cli changefeed remove: Error: [CDC:ErrPDEtcdAPIError]etcd api call error: context deadline exceeded - ### What did you do?
- Create 300 changefeed with kafka sink
- Remove all changefeeds one by one
### What did you expect to see?
_No response_
### What did you see instead?
- When removing changefeed 6, cli failed: Error: [CDC:ErrPDEtcdAPIError]etcd api call error: context deadline exceeded
### Versions of the cluster
Upstream TiDB cluster version (execute `SELECT tidb_version();` in a MySQL client):
```console
(paste TiDB cluster version here)
```
Upstream TiKV version (execute `tikv-server --version`):
```console
(paste TiKV version here)
v5.4.2
```
TiCDC version (execute `cdc version`):
```console
(paste TiCDC version here)
v5.4.2
```",1,cdc cli changefeed remove: error etcd api call error context deadline exceeded what did you do create changefeed with kafka sink remove all changefeeds one by one what did you expect to see no response what did you see instead when removing changefeed cli failed error etcd api call error context deadline exceeded versions of the cluster upstream tidb cluster version execute select tidb version in a mysql client console paste tidb cluster version here upstream tikv version execute tikv server version console paste tikv version here ticdc version execute cdc version console paste ticdc version here ,1
8277,26603757324.0,IssuesEvent,2023-01-23 17:36:49,o3de/o3de,https://api.github.com/repos/o3de/o3de,closed,AR Bug Report,kind/bug needs-triage kind/automation,"**Describe the bug**
A clear and concise description of what the bug is.
**Failed Jenkins Job Information:**
The name of the job that failed, the job build number, and a code snippet of the failure.
**Attachments**
Attach the Jenkins job log as a .txt file and any other relevant information.
**Additional context**
Add any other context about the problem here.
",1.0,"AR Bug Report - **Describe the bug**
A clear and concise description of what the bug is.
**Failed Jenkins Job Information:**
The name of the job that failed, the job build number, and a code snippet of the failure.
**Attachments**
Attach the Jenkins job log as a .txt file and any other relevant information.
**Additional context**
Add any other context about the problem here.
",1,ar bug report describe the bug a clear and concise description of what the bug is failed jenkins job information the name of the job that failed job build number and code snippit of the failure attachments attach the jenkins job log as a txt file and any other relevant information additional context add any other context about the problem here ,1
9715,30327371721.0,IssuesEvent,2023-07-11 01:55:35,astropy/astropy,https://api.github.com/repos/astropy/astropy,opened,pre-commit check cannot see missing import,Bug dev-automation,"pre-commit on the PR was green, and my editor hooked up to flake8 did not catch it anymore (it used to). What is going on here? This sounds like a bug in the pre-commit/ruff checks. I feel like we went overboard with such checks, making them so complicated that it is starting to fail because half the devs don't know how to read all the settings.
`E NameError: name 'nullcontext' is not defined`
(The problem above arose because I forgot to add `from contextlib import nullcontext`, but it was not caught until CI ran.)
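A minimal sketch of the failure mode and the fix follows; everything except `contextlib.nullcontext` itself is illustrative, and an undefined-name check such as pyflakes/ruff rule F821 is the kind of setting that would catch this before CI, assuming it is enabled in the pre-commit configuration.
```python
# Without the import, the bare use of nullcontext only fails when the code path runs,
# surfacing as a NameError at test time; a static undefined-name check flags it immediately.
from contextlib import nullcontext

def maybe_locked(lock=None):
    # Illustrative helper: use the real lock when given, otherwise a do-nothing context manager.
    return lock if lock is not None else nullcontext()

with maybe_locked():
    pass  # body runs with no real locking
```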
@nstarman , @eerovaher , or @WilliamJamieson , do you know how to fix this in the settings?",1.0,"pre-commit check cannot see missing import - pre-commit on the PR was green, and my editor hooked up to flake8 did not catch it anymore (it used to). What is going on here? This sounds like a bug in the pre-commit/ruff checks. I feel like we went overboard with such checks, making them so complicated that it is starting to fail because half the devs don't know how to read all the settings.
`E NameError: name 'nullcontext' is not defined`
(The problem above arose because I forgot to add `from contextlib import nullcontext`, but it was not caught until CI ran.)
@nstarman , @eerovaher , or @WilliamJamieson , do you know how to fix this in the settings?",1,pre commit check cannot see missing import pre commit on the pr was green and my editor hooked up to did not catch it anymore it used to what is going on here this sounds like a bug in the pre commit ruff checks i feel like we went overboard with such checks making them so complicated that it is starting to fail because half the devs don t know how to read all the settings e nameerror name nullcontext is not defined the problem above as i forgot to add from contextlib import nullcontext but it was not caught until ci ran nstarman eerovaher or williamjamieson do you know how to fix this in the settings ,1
5180,18821302320.0,IssuesEvent,2021-11-10 08:39:04,home-assistant/core,https://api.github.com/repos/home-assistant/core,closed,trigger.to_state.context only filled in on dimmer light.entitites?,integration: automation,"### The problem
I am using the fact that ""trigger.to_state.context.user_id != None"" for several of my light entities when they are manually modified via the Lovelace UI, in order to disable automations from overriding the manual settings (for a while).
But it seems that the value is only filled in for some light entities; some entities never get the context value filled in, for some reason.
I've got four Qubino dimmers and a few Qubino switches. The switches are manually added and configured as light entities as well.
I then have an automation that triggers if the dimmers or switches change (as included in examples), which works fine for the dimmer lights, **but not for the two switches**, because it turns out the trigger doesn't contain the context value in the to_state, as it does for the dimmers.
The full trigger for a switch, contains:
`
{'id': '1', 'idx': '1', 'platform': 'state', 'entity_id': 'switch.taklampa_kallartrappa', 'from_state': , 'to_state': , 'for': datetime.timedelta(seconds=1), 'attribute': None, 'description': 'state of switch.taklampa_kallartrappa'}
`
### What is version of Home Assistant Core has the issue?
2021.9.7
### What was the last working version of Home Assistant Core?
_No response_
### What type of installation are you running?
Home Assistant Container
### Integration causing the issue
automation
### Link to integration documentation on our website
_No response_
### Example YAML snippet
```yaml
alias: AutoLights Disable on Manual Override
description: ''
trigger:
- platform: state
entity_id: light.flush_dimmer_koket
attribute: brightness
for:
hours: 0
minutes: 0
seconds: 1
milliseconds: 0
- platform: state
attribute: brightness
entity_id: light.flush_dimmer_sovrummet
for:
hours: 0
minutes: 0
seconds: 1
milliseconds: 0
- platform: state
entity_id: light.flush_dimmer_vardagsrum_plus
attribute: brightness
for:
hours: 0
minutes: 0
seconds: 1
milliseconds: 0
- platform: state
entity_id: light.tvattstuga
to: 'on'
for:
hours: 0
minutes: 0
seconds: 1
milliseconds: 0
from: 'off'
condition:
- condition: template
value_template: '{{ trigger.to_state.context.user_id != None }}'
action:
- service: timer.start
data:
duration: '01:30:00'
target:
entity_id: |
{% if trigger.entity_id is search (""vardagsrum"") -%}
timer.autolights_disable_vrum
{% elif trigger.entity_id is search (""kitchen|koket"") -%}
timer.autolights_disable_kitchen
{% elif trigger.entity_id is search(""sovrum"") -%}
timer.autolights_disable_sovrum
{% elif trigger.entity_id is search(""tvattstuga"") -%}
timer.autolights_disable_tvattstuga
{%- else -%}
timer.DoesntExistFailure
{%- endif -%}
mode: single
```
### Anything in the logs that might be useful for us?
_No response_
### Additional information
I've tried switching between using the *switch* instead of the created light.entitiy, but no change.",1.0,"trigger.to_state.context only filled in on dimmer light.entitites? - ### The problem
I am using the fact that ""trigger.to_state.context.user_id != None"" for several of my light entities when they are manually modified via the Lovelace UI, in order to disable automations from overriding the manual settings (for a while).
But it seems that the value is only filled in for some light entities; some entities never get the context value filled in, for some reason.
I've got four Qubino dimmers and a few Qubino switches. The switches are manually added and configured as light entities as well.
I then have an automation that triggers if the dimmers or switches change (as included in examples), which works fine for the dimmer lights, **but not for the two switches**, because it turns out the trigger doesn't contain the context value in the to_state, as it does for the dimmers.
The full trigger for a switch, contains:
`
{'id': '1', 'idx': '1', 'platform': 'state', 'entity_id': 'switch.taklampa_kallartrappa', 'from_state': , 'to_state': , 'for': datetime.timedelta(seconds=1), 'attribute': None, 'description': 'state of switch.taklampa_kallartrappa'}
`
### What is version of Home Assistant Core has the issue?
2021.9.7
### What was the last working version of Home Assistant Core?
_No response_
### What type of installation are you running?
Home Assistant Container
### Integration causing the issue
automation
### Link to integration documentation on our website
_No response_
### Example YAML snippet
```yaml
alias: AutoLights Disable on Manual Override
description: ''
trigger:
- platform: state
entity_id: light.flush_dimmer_koket
attribute: brightness
for:
hours: 0
minutes: 0
seconds: 1
milliseconds: 0
- platform: state
attribute: brightness
entity_id: light.flush_dimmer_sovrummet
for:
hours: 0
minutes: 0
seconds: 1
milliseconds: 0
- platform: state
entity_id: light.flush_dimmer_vardagsrum_plus
attribute: brightness
for:
hours: 0
minutes: 0
seconds: 1
milliseconds: 0
- platform: state
entity_id: light.tvattstuga
to: 'on'
for:
hours: 0
minutes: 0
seconds: 1
milliseconds: 0
from: 'off'
condition:
- condition: template
value_template: '{{ trigger.to_state.context.user_id != None }}'
action:
- service: timer.start
data:
duration: '01:30:00'
target:
entity_id: |
{% if trigger.entity_id is search (""vardagsrum"") -%}
timer.autolights_disable_vrum
{% elif trigger.entity_id is search (""kitchen|koket"") -%}
timer.autolights_disable_kitchen
{% elif trigger.entity_id is search(""sovrum"") -%}
timer.autolights_disable_sovrum
{% elif trigger.entity_id is search(""tvattstuga"") -%}
timer.autolights_disable_tvattstuga
{%- else -%}
timer.DoesntExistFailure
{%- endif -%}
mode: single
```
### Anything in the logs that might be useful for us?
_No response_
### Additional information
I've tried switching between using the *switch* instead of the created light.entitiy, but no change.",1,trigger to state context only filled in on dimmer light entitites the problem i am using the fact that the trigger to state context user id none for several of my light entities when they are manually modified via lovelace ui in order to disable automations from overriding the manual settings for a while but it seems that the value is only filled in for some light entitites never get the context value filled in for some reason i ve got four qubino dimmers and a few qubino switches the switches are manually added configged as lights entities as well i then have an automation that triggers if the dimmers or switches change as included in examples which works fine for the dimmer lights but not for the two switches because it turns out the trigger doesn t contain the context value in the to state as it does for the dimmers the full trigger for a switch contains id idx platform state entity id switch taklampa kallartrappa from state to state for datetime timedelta seconds attribute none description state of switch taklampa kallartrappa what is version of home assistant core has the issue what was the last working version of home assistant core no response what type of installation are you running home assistant container integration causing the issue automation link to integration documentation on our website no response example yaml snippet yaml alias autolights disable on manual override description trigger platform state entity id light flush dimmer koket attribute brightness for hours minutes seconds milliseconds platform state attribute brightness entity id light flush dimmer sovrummet for hours minutes seconds milliseconds platform state entity id light flush dimmer vardagsrum plus attribute brightness for hours minutes seconds milliseconds platform state entity id light tvattstuga to on for hours minutes seconds milliseconds from off condition condition template value template trigger to state context user id none action service timer start data duration target entity id if trigger entity id is search vardagsrum timer autolights disable vrum elif trigger entity id is search kitchen koket timer autolights disable kitchen elif trigger entity id is search sovrum timer autolights disable sovrum elif trigger entity id is search tvattstuga timer autolights disable tvattstuga else timer doesntexistfailure endif mode single anything in the logs that might be useful for us no response additional information i ve tried switching between using the switch instead of the created light entitiy but no change ,1
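A minimal sketch of the check the template condition above performs, written in plain Python purely for illustration (this is not Home Assistant's implementation): a state change only carries a `context.user_id` when a logged-in user caused it, so `None` means "changed by an automation or the device itself" and anything else means a manual override. The `trigger` dict shape mirrors the debug output quoted in the issue.

```python
def is_manual_override(trigger: dict) -> bool:
    # A user_id is only present when a logged-in user changed the state
    # (e.g. via the Lovelace UI); automations and device-initiated changes
    # leave it as None. Mirrors the Jinja condition
    # '{{ trigger.to_state.context.user_id != None }}' from the automation.
    to_state = trigger.get("to_state") or {}
    context = to_state.get("context") or {}
    return context.get("user_id") is not None


# The switch trigger quoted in the issue has no usable context in to_state,
# so the condition evaluates to False and the timer is never started.
print(is_manual_override({"to_state": {"context": {"user_id": None}}}))      # False
print(is_manual_override({"to_state": {"context": {"user_id": "abc123"}}}))  # True
```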
8152,26282663827.0,IssuesEvent,2023-01-07 13:31:23,ita-social-projects/TeachUA,https://api.github.com/repos/ita-social-projects/TeachUA,closed,"[Club, API] PATCH /api/club/{id} endpoint is not performing direct function of updating club",bug Backend Priority: Medium API Automation,"**Environment:** Windows 11, Google Chrome Version 108.0.5359.125 (Official Build) (64-bit).
**Reproducible:** always.
**Build found:** last commit [5757356](https://github.com/ita-social-projects/TeachUA/commit/57573565fd58d1553fa880a969c94f7cafa0204b)
**Preconditions**
1. Open Swagger UI.
**Steps to reproduce**
1. Go to 'club' section.
2. Click on '/api/club/{id}' endpoint.
3. Pay attention to the example value of the Request body.
**Actual result**
The endpoint updates the user who is assigned to the club, not the club itself.
**Expected result**
Based on the endpoint, it should update the club fields, similarly to the PUT method, but only specific fields rather than all of them.
",1.0,"[Club, API] PATCH /api/club/{id} endpoint is not performing direct function of updating club - **Environment:** Windows 11, Google Chrome Version 108.0.5359.125 (Official Build) (64-bit).
**Reproducible:** always.
**Build found:** last commit [5757356](https://github.com/ita-social-projects/TeachUA/commit/57573565fd58d1553fa880a969c94f7cafa0204b)
**Preconditions**
1. Open Swagger UI.
**Steps to reproduce**
1. Go to 'club' section.
2. Click on '/api/club/{id}' endpoint.
3. Pay attention to the example value of the Request body.
**Actual result**
Endpoint updates the user who is assigned to the club.
**Expected result**
Based on the endpoint, it should update the club fields similar to the PUT method, but not all fields, only specific ones.
",1, patch api club id endpoint is not performing direct function of updating club environment windows google chrome version official build bit reproducible always build found last commit preconditions open swagger ui steps to reproduce go to club section click on api club id endpoint pay attention to the example value of the request body actual result endpoint updates the user who is assigned to the club img width alt image src expected result based on the endpoint it should update the club fields similar to the put method but not all fields only specific ones ,1
6468,23212778177.0,IssuesEvent,2022-08-02 11:40:56,submariner-io/releases,https://api.github.com/repos/submariner-io/releases,opened,Automate waiting for images to build,enhancement automation size:medium,"We currently have an ability to detect if any open PRs from a previous stage are still open.
On the same note, we could detect whether the necessary images have finished building.
We could either try to query the jobs on the CI, or piggyback on the dependency tracking bot.
For this, image building jobs for projects that build images for a specific tag could open an `automated` issue when they start, and close it when the job ends successfully.
We could then use ""Depends on"" (similar to #457) on the PR created by `make release` to track these ""tracker"" issues.
The only problem is that the E2E would still fail, but that could easily be manually re-run (and perhaps this can be further automated in the future).",1.0,"Automate waiting for images to build - We currently have an ability to detect if any open PRs from a previous stage are still open.
On the same note, we could detect if necessary images are finished building.
We could either try to query the jobs on the CI, or piggy back on the dependency tracking bot.
For this, image building jobs for projects that build images for a specific tag could open an `automated` issue when they start, and close it when the job ends successfully.
We could then use ""Depends on"" (similar to #457) on the PR created by `make release` to track these ""tracker"" issues.
The only problem is that the E2E would still fail, but that could easily be manually re-run (and perhaps this can be further automated in the future).",1,automate waiting for images to build we currently have an ability to detect if any open prs from a previous stage are still open on the same note we could detect if necessary images are finished building we could either try to query the jobs on the ci or piggy back on the dependency tracking bot for this image building jobs for projects that build images for a specific tag could open an automated issue when they start and close it when the job ends successfully we could then use depends on similar to on the pr created by make release to track these tracker issues the only problem is that the would still fail but that could easily be manually re run and perhaps this can be further automated in the future ,1
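A rough sketch, under assumed names, of the kind of check the `make release` automation could run: list still-open "tracker" issues (the ones image-build jobs would open) via the GitHub REST API and wait until they are all closed. The repository, label name, and token handling here are assumptions for illustration only.

```python
import os
import time

import requests

REPO = "submariner-io/releases"  # assumption: where the tracker issues would live
LABEL = "automated"              # assumption: label applied by image-build jobs
TOKEN = os.environ.get("GITHUB_TOKEN", "")


def open_tracker_issues():
    # List open issues with the tracker label.
    resp = requests.get(
        f"https://api.github.com/repos/{REPO}/issues",
        params={"state": "open", "labels": LABEL},
        headers={"Authorization": f"token {TOKEN}"} if TOKEN else {},
        timeout=10,
    )
    resp.raise_for_status()
    # The issues endpoint also returns pull requests; keep only real issues.
    return [i for i in resp.json() if "pull_request" not in i]


# Poll until every image-build tracker issue has been closed.
while issues := open_tracker_issues():
    print(f"Waiting on {len(issues)} image build(s):",
          ", ".join(i["title"] for i in issues))
    time.sleep(60)
print("All images built; safe to continue the release.")
```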
144963,22586942094.0,IssuesEvent,2022-06-28 16:00:13,blockframes/blockframes,https://api.github.com/repos/blockframes/blockframes,closed,Calendar Improvements,App - Festival 🎪 Design - UX July clean up,"_Estimated priority: medium, but can tend to high_
List of improvements / features that wireframes should reflect:
- [ ] enable the list view for the calendar (prepare screens for switching);
- [ ] prepare the list view screen;
- [ ] block the user if they want to create an event in the past (+ what message to show?);
- [ ] enable different period views (day, week, month?) (prepare screens for each).",1.0,"Calendar Improvements - _Estimated priority: medium, but can tend to high_
List of improvements / features that wireframes should reflect :
- [ ] able the list view for calendar (prepare screens for switching);
- [ ] prepare the list view screen;
- [ ] block user if wants to create an event in the past (+ what message to show?);
- [ ] able different period views (day, week, month ?) (prepare screens for each).",0,calendar improvements estimated priority medium but can tend to high list of improvements features that wireframes should reflect able the list view for calendar prepare screens for switching prepare the list view screen block user if wants to create an event in the past what message to show able different period views day week month prepare screens for each ,0
789977,27811804032.0,IssuesEvent,2023-03-18 07:40:02,AY2223S2-CS2103T-W11-3/tp,https://api.github.com/repos/AY2223S2-CS2103T-W11-3/tp,closed,Update Card::isSameCard to check for both question and answer of the card,priority.Low,"This means a unique card is defined not just by the question, but also the answer.
Enables users to have the same question but with different answers. Useful for situations where the same question might have different answer under different contexts - e.g. What is a bat? (Deck - Baseball vs Deck - Mammals)",1.0,"Update Card::isSameCard to check for both question and answer of the card - This means a unique card is defined not just by the question, but also the answer.
Enables users to have the same question but with different answers. Useful for situations where the same question might have different answer under different contexts - e.g. What is a bat? (Deck - Baseball vs Deck - Mammals)",0,update card issamecard to check for both question and answer of the card this means a unique card is defined not just by the question but also the answer enables users to have the same question but with different answers useful for situations where the same question might have different answer under different contexts e g what is a bat deck baseball vs deck mammals ,0
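The project itself is Java, but the intended equality rule is easy to illustrate. A minimal Python sketch, assuming a card only has `question` and `answer` fields, of treating two cards as "the same" only when both fields match:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Card:
    question: str
    answer: str

    def is_same_card(self, other: "Card") -> bool:
        # A card is a duplicate only if BOTH the question and the answer match,
        # so the same question may appear with different answers in different decks.
        return self.question == other.question and self.answer == other.answer


baseball = Card("What is a bat?", "A club used to hit the ball")
mammal = Card("What is a bat?", "A flying nocturnal mammal")
print(baseball.is_same_card(mammal))  # False: same question, different answers
print(baseball.is_same_card(Card("What is a bat?", "A club used to hit the ball")))  # True
```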
278964,30702429141.0,IssuesEvent,2023-07-27 01:29:27,nidhi7598/linux-3.0.35_CVE-2018-13405,https://api.github.com/repos/nidhi7598/linux-3.0.35_CVE-2018-13405,closed,CVE-2020-29660 (Medium) detected in linux-stable-rtv3.8.6 - autoclosed,Mend: dependency security vulnerability,"## CVE-2020-29660 - Medium Severity Vulnerability
Vulnerable Library - linux-stable-rtv3.8.6
A locking inconsistency issue was discovered in the tty subsystem of the Linux kernel through 5.9.13. drivers/tty/tty_io.c and drivers/tty/tty_jobctrl.c may allow a read-after-free attack against TIOCGSID, aka CID-c8bcd9c5be24.
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
For more information on CVSS3 Scores, click here.
Suggested Fix
Type: Upgrade version
Release Date: 2020-12-09
Fix Resolution: v5.10-rc7
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2020-29660 (Medium) detected in linux-stable-rtv3.8.6 - autoclosed - ## CVE-2020-29660 - Medium Severity Vulnerability
Vulnerable Library - linux-stable-rtv3.8.6
A locking inconsistency issue was discovered in the tty subsystem of the Linux kernel through 5.9.13. drivers/tty/tty_io.c and drivers/tty/tty_jobctrl.c may allow a read-after-free attack against TIOCGSID, aka CID-c8bcd9c5be24.
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
For more information on CVSS3 Scores, click here.
Suggested Fix
Type: Upgrade version
Release Date: 2020-12-09
Fix Resolution: v5.10-rc7
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve medium detected in linux stable autoclosed cve medium severity vulnerability vulnerable library linux stable julia cartwright s fork of linux stable rt git library home page a href found in head commit a href found in base branch master vulnerable source files drivers tty tty io c vulnerability details a locking inconsistency issue was discovered in the tty subsystem of the linux kernel through drivers tty tty io c and drivers tty tty jobctrl c may allow a read after free attack against tiocgsid aka cid publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required high user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version release date fix resolution step up your open source security game with mend ,0
5914,21640437202.0,IssuesEvent,2022-05-05 18:11:49,willowtreeapps/vocable-ios,https://api.github.com/repos/willowtreeapps/vocable-ios,opened,Refactor PresetPhrasesTests to use injection data,automation,"This ticket is for refactoring the test functions in PresetPhrasesTests to use injected preset data.
This work is part of the overall effort outlined in https://github.com/willowtreeapps/vocable-ios/issues/590 (the parent ticket to this one)
Injection data is hardcoded as type `Presets` in setup(). The ids are assigned to the cell element for category and phrase.
`func injectPresetData() -> Presets {
    return Presets {
        Category(id: ""general_category"", ""General"") {
            Phrase(id: ""general_be_patient"", ""Please be patient"")
            Phrase(id: ""general_donde_estoy"", languageCode: ""es"", ""No sé donde estoy"")
        }
    }
}`
**Acceptance Criteria**:
All tests in SettingsScreenTests pass",1.0,"Refactor PresetPhrasesTests to use injection data - This ticket is for refactoring the test functions in PresetPhrasesTests to use injected preset data.
This work is part of the overall effort outlined in https://github.com/willowtreeapps/vocable-ios/issues/590 (the parent ticket to this one)
Injection data is hardcoded as type `Presets` in setup(). The ids are assigned to the cell element for category and phrase.
`func injectPresetData() -> Presets {
return Presets {
Category(id: ""general_category"", ""General"") {
Phrase(id: ""general_be_patient"", ""Please be patient"")
Phrase(id: ""general_donde_estoy"", languageCode: ""es"", ""No sé donde estoy"")
}`
**Acceptance Criteria**:
All tests in SettingsScreenTests pass",1,refactor presetphrasestests to use injection data this ticket is for refactoring the test functions in presetphrasestests to use injected preset data this work is part of the overall effort outlined in the parent ticket to this one injection data is hardcoded as type presets in setup the ids are assigned to the cell element for category and phrase func injectpresetdata presets return presets category id general category general phrase id general be patient please be patient phrase id general donde estoy languagecode es no sé donde estoy acceptance criteria all tests in settingsscreentests pass,1
3087,13062864627.0,IssuesEvent,2020-07-30 15:44:02,geosolutions-it/geoserver,https://api.github.com/repos/geosolutions-it/geoserver,closed,WFS 1.0 test package build,CITE CITE_AUTOMATION,"Build to deploy in repositories the test suite for this protocol.
* Repository: https://github.com/opengeospatial/ets-wfs10
* Version, latest
Parametes: both repo and branch/tag to build
Deploy: on OSGeo
Setup for the deploy:
```
falseosgeoOpen Source Geospatial Foundation - WebDAV uploaddav:http://download.osgeo.org/upload/geotools/
```
",1.0,"WFS 1.0 test package build - Build to deploy in repositories the test suite for this protocol.
* Repository: https://github.com/opengeospatial/ets-wfs10
* Version, latest
Parametes: both repo and branch/tag to build
Deploy: on OSGeo
Setup for the deploy:
```
falseosgeoOpen Source Geospatial Foundation - WebDAV uploaddav:http://download.osgeo.org/upload/geotools/
```
",1,wfs test package build build to deploy in repositories the test suite for this protocol repository version latest parametes both repo and branch tag to build deploy on osgeo setup for the deploy false osgeo open source geospatial foundation webdav upload dav ,1
441798,30799639382.0,IssuesEvent,2023-07-31 23:31:52,risingwavelabs/risingwave-docs,https://api.github.com/repos/risingwavelabs/risingwave-docs,opened,`access_key` and `secret_key` are required fields for AWS auth ,documentation,"### Related code PR
https://github.com/risingwavelabs/risingwave/pull/11120
### Which part(s) of the docs might be affected or should be updated? And how?
Document that `access_key` and `secret_key` are required fields for sources and sinks that use AWS auth
### Reference
_No response_",1.0,"`access_key` and `secret_key` are required fields for AWS auth - ### Related code PR
https://github.com/risingwavelabs/risingwave/pull/11120
### Which part(s) of the docs might be affected or should be updated? And how?
Document that `access_key` and `secret_key` are required fields for sources and sinks that use AWS auth
### Reference
_No response_",0, access key and secret key are required fields for aws auth related code pr which part s of the docs might be affected or should be updated and how document that access key and secret key are required fields for sources and sinks that use aws auth reference no response ,0
109222,16833831118.0,IssuesEvent,2021-06-18 09:16:17,AlexRogalskiy/qiitos,https://api.github.com/repos/AlexRogalskiy/qiitos,opened,CVE-2020-7753 (High) detected in trim-0.0.1.tgz,security vulnerability,"## CVE-2020-7753 - High Severity Vulnerability
Vulnerable Library - trim-0.0.1.tgz
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2020-7753 (High) detected in trim-0.0.1.tgz - ## CVE-2020-7753 - High Severity Vulnerability
Vulnerable Library - trim-0.0.1.tgz
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve high detected in trim tgz cve high severity vulnerability vulnerable library trim tgz trim string whitespace library home page a href path to dependency file qiitos package json path to vulnerable library qiitos node modules trim package json dependency hierarchy remark preset davidtheclark tgz root library remark cli tgz remark tgz remark parse tgz x trim tgz vulnerable library found in head commit a href vulnerability details all versions of package trim are vulnerable to regular expression denial of service redos via trim publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution trim step up your open source security game with whitesource ,0
135066,19485475379.0,IssuesEvent,2021-12-26 09:21:15,zainfathoni/kelas.rumahberbagi.com,https://api.github.com/repos/zainfathoni/kelas.rumahberbagi.com,closed,CTA,enhancement design ui,"## Description
Call to action to purchase the course.
## Narrative
- **As an** authenticated user
- **I want** it to be obvious how to purchase the course
- **so that** I can start the purchase transaction flow easily.
## Acceptance Criteria
[Dashboard page](app/routes/dashboard.tsx) should render this [Single price with details](https://tailwindui.com/components/marketing/sections/pricing#component-56cbd4f191ac0d54e5a5c0287481d5b9) call to action.

### Scenario 1
- **Given** I am an authenticated user,
- **and** I have not purchased the course yet,
- **when** I click the CTA button,
- **then** it redirects to the `/dashboard/purchase` route for me to start the transaction.
## Implementation Model
Code snippet
```jsx
/* This example requires Tailwind CSS v2.0+ */
import { CheckCircleIcon } from '@heroicons/react/solid'
const includedFeatures = [
'Private forum access',
'Member resources',
'Entry to annual conference',
'Official member t-shirt',
]
export default function Example() {
return (
Simple no-tricks pricing
If you're not satisfied, contact us within the first 14 days and we'll send you a full refund.
Lifetime Membership
Lorem ipsum dolor sit amet consect etur adipisicing elit. Itaque amet indis perferendis blanditiis
repellendus etur quidem assumenda.
)
}
```
## Tasks
- [ ] Render the [CTA](https://tailwindui.com/components/marketing/sections/pricing#component-56cbd4f191ac0d54e5a5c0287481d5b9) inside the content section of the [dashboard.tsx](app/routes/dashboard.tsx) page
- [ ] Implement the redirect action to `/dashboard/purchase` route
- [ ] Implement an empty `/dashboard/purchase` route page
- [ ] Write an end-to-end test case for Scenario 1 under `e2e/cta.spec.ts` file
- [ ] Move the edit profile functionality out of the `/dashboard/index.tsx` page and put it in the `/dashboard/settings.tsx` route instead.
- [ ] Update the e2e tests accordingly while preserving the edit profile scenarios and functionality.",1.0,"CTA - ## Description
Call to action to purchase the course.
## Narrative
- **As an** authenticated user
- **I want** it to be obvious how to purchase the course
- **so that** I can start the purchase transaction flow easily.
## Acceptance Criteria
[Dashboard page](app/routes/dashboard.tsx) should render this [Single price with details](https://tailwindui.com/components/marketing/sections/pricing#component-56cbd4f191ac0d54e5a5c0287481d5b9) call to action.

### Scenario 1
- **Given** I am an authenticated user,
- **and** I have not purchased the course yet,
- **when** I click the CTA button,
- **then** it redirects to the `/dashboard/purchase` route for me to start the transaction.
## Implementation Model
Code snippet
```jsx
/* This example requires Tailwind CSS v2.0+ */
import { CheckCircleIcon } from '@heroicons/react/solid'
const includedFeatures = [
'Private forum access',
'Member resources',
'Entry to annual conference',
'Official member t-shirt',
]
export default function Example() {
return (
Simple no-tricks pricing
If you're not satisfied, contact us within the first 14 days and we'll send you a full refund.
Lifetime Membership
Lorem ipsum dolor sit amet consect etur adipisicing elit. Itaque amet indis perferendis blanditiis
repellendus etur quidem assumenda.
)
}
```
## Tasks
- [ ] Render the [CTA](https://tailwindui.com/components/marketing/sections/pricing#component-56cbd4f191ac0d54e5a5c0287481d5b9) inside the content section of the [dashboard.tsx](app/routes/dashboard.tsx) page
- [ ] Implement the redirect action to `/dashboard/purchase` route
- [ ] Implement an empty `/dashboard/purchase` route page
- [ ] Write an end-to-end test case for Scenario 1 under `e2e/cta.spec.ts` file
- [ ] Move the edit profile functionality out of the `/dashboard/index.tsx` page and put it in the `/dashboard/settings.tsx` route instead.
- [ ] Update the e2e tests accordingly while preserving the edit profile scenarios and functionality.",0,cta description call to action to purchase the course narrative as an authenticated user i want it to be obvious how to purchase the course so that i can start the purchase transaction flow easily acceptance criteria app routes dashboard tsx should render this call to action scenario given i am an authenticated user and i have not purchased the course yet when i click the cta button then it redirects to the dashboard purchase route for me to start the transaction implementation model code snippet jsx this example requires tailwind css import checkcircleicon from heroicons react solid const includedfeatures private forum access member resources entry to annual conference official member t shirt export default function example return simple no tricks pricing if you re not satisfied contact us within the first days and we ll send you a full refund lifetime membership lorem ipsum dolor sit amet consect etur adipisicing elit itaque amet indis perferendis blanditiis repellendus etur quidem assumenda what s included includedfeatures map feature feature pay once own it forever usd learn about our membership policy a href classname flex items center justify center px py border border transparent text base font medium rounded md text white bg gray hover bg gray get access get a free sample tasks render the inside the content section of the app routes dashboard tsx page implement the redirect action to dashboard purchase route implement an empty dashboard purchase route page write an end to end test case for scenario under cta spec ts file move the edit profile functionality out of the dashboard index tsx page and put it in the dashboard settings tsx route instead update the tests accordingly while preserving the edit profile scenarios and functionality ,0
10231,32030411137.0,IssuesEvent,2023-09-22 11:56:50,dcaribou/transfermarkt-datasets,https://api.github.com/repos/dcaribou/transfermarkt-datasets,opened,Add useful git hooks,automations,"Git hooks can be useful to avoid committing untested components.
For example [dbt-checkpoint](https://github.com/dbt-checkpoint/dbt-checkpoint) can be configured to run and test dbt models before they get committed.",1.0,"Add useful git hooks - Git hooks can be useful to avoid committing untested components.
For example [dbt-checkpoint](https://github.com/dbt-checkpoint/dbt-checkpoint) can be configured to run and test dbt models before they get committed.",1,add useful git hooks git hooks can be useful to avoid committing untested components for example can be configured to run and test dbt models before they get committed ,1
5877,21529745279.0,IssuesEvent,2022-04-28 22:41:51,rancher-sandbox/rancher-desktop,https://api.github.com/repos/rancher-sandbox/rancher-desktop,closed,rdctl start doesn't return and doesn't change container engine,kind/bug platform/windows area/automation,"Before I shut down RD I was running it with the moby runtime. Then I started it up with from the CLI:
```console
PS C:\Users\Jan\Downloads> rdctl start --container-engine containerd
About to launch C:\Users\Jan\AppData\Local/Programs/Rancher Desktop/Rancher Desktop.exe --kubernetes-container-engine containerd ...
[8380:0426/202102.226:ERROR:gpu_init.cc(446)] Passthrough is not supported, GL is disabled, ANGLE is
(node:7088) [DEP0123] DeprecationWarning: Setting the TLS ServerName to an IP address is not permitted by RFC 6066. This will be ignored in a future version.
(Use `Rancher Desktop --trace-deprecation ...` to show where the warning was created)
```
It does start up RD again, but it did not change the container engine to `containerd`.
The `rdctl start` command also never returned to the command prompt. When I aborted it with Ctrl-C then RD was stopped as well (not just the Window, but also the background app).
Finally there is the issue of the noisy output, but that is secondary to the functional issues.",1.0,"rdctl start doesn't return and doesn't change container engine - Before I shut down RD I was running it with the moby runtime. Then I started it up with from the CLI:
```console
PS C:\Users\Jan\Downloads> rdctl start --container-engine containerd
About to launch C:\Users\Jan\AppData\Local/Programs/Rancher Desktop/Rancher Desktop.exe --kubernetes-container-engine containerd ...
[8380:0426/202102.226:ERROR:gpu_init.cc(446)] Passthrough is not supported, GL is disabled, ANGLE is
(node:7088) [DEP0123] DeprecationWarning: Setting the TLS ServerName to an IP address is not permitted by RFC 6066. This will be ignored in a future version.
(Use `Rancher Desktop --trace-deprecation ...` to show where the warning was created)
```
It does start up RD again, but it did not change the container engine to `containerd`.
The `rdctl start` command also never returned to the command prompt. When I aborted it with Ctrl-C then RD was stopped as well (not just the Window, but also the background app).
Finally there is the issue of the noisy output, but that is secondary to the functional issues.",1,rdctl start doesn t return and doesn t change container engine before i shut down rd i was running it with the moby runtime then i started it up with from the cli console ps c users jan downloads rdctl start container engine containerd about to launch c users jan appdata local programs rancher desktop rancher desktop exe kubernetes container engine containerd passthrough is not supported gl is disabled angle is node deprecationwarning setting the tls servername to an ip address is not permitted by rfc this will be ignored in a future version use rancher desktop trace deprecation to show where the warning was created it does start up rd again but it did not change the container engine to containerd the rdctl start command also never returned to the command prompt when i aborted it with ctrl c then rd was stopped as well not just the window but also the background app finally there is the issue of the noisy output but that is secondary to the functional issues ,1
280312,30820648222.0,IssuesEvent,2023-08-01 16:05:39,momo-tong/jackson-databind-2.13.0,https://api.github.com/repos/momo-tong/jackson-databind-2.13.0,opened,jackson-databind-2.13.0.jar: 5 vulnerabilities (highest severity is: 7.5),Mend: dependency security vulnerability," Vulnerable Library - jackson-databind-2.13.0.jar
General data-binding functionality for Jackson: works on core streaming API
In FasterXML jackson-databind before 2.13.4, resource exhaustion can occur because of a lack of a check in BeanDeserializer._deserializeFromArray to prevent use of deeply nested arrays. An application is vulnerable only with certain customized choices for deserialization.
For more information on CVSS3 Scores, click here.
### Suggested Fix
Type: Upgrade version
Release Date: 2022-10-02
Fix Resolution: 2.13.4
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
CVE-2022-42003
### Vulnerable Library - jackson-databind-2.13.0.jar
General data-binding functionality for Jackson: works on core streaming API
In FasterXML jackson-databind before 2.14.0-rc1, resource exhaustion can occur because of a lack of a check in primitive value deserializers to avoid deep wrapper array nesting, when the UNWRAP_SINGLE_VALUE_ARRAYS feature is enabled. Additional fix version in 2.13.4.1 and 2.12.17.1
For more information on CVSS3 Scores, click here.
### Suggested Fix
Type: Upgrade version
Release Date: 2022-10-02
Fix Resolution: 2.13.4.1
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
CVE-2020-36518
### Vulnerable Library - jackson-databind-2.13.0.jar
General data-binding functionality for Jackson: works on core streaming API
jackson-databind before 2.13.0 allows a Java StackOverflow exception and denial of service via a large depth of nested objects.
Mend Note: After conducting further research, Mend has determined that all versions of com.fasterxml.jackson.core:jackson-databind up to version 2.13.2 are vulnerable to CVE-2020-36518.
For more information on CVSS3 Scores, click here.
### Suggested Fix
Type: Upgrade version
Release Date: 2022-03-11
Fix Resolution: 2.13.2.1
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
CVE-2021-46877
### Vulnerable Library - jackson-databind-2.13.0.jar
General data-binding functionality for Jackson: works on core streaming API
jackson-databind 2.10.x through 2.12.x before 2.12.6 and 2.13.x before 2.13.1 allows attackers to cause a denial of service (2 GB transient heap usage per read) in uncommon situations involving JsonNode JDK serialization.
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
WS-2021-0616
### Vulnerable Library - jackson-databind-2.13.0.jar
General data-binding functionality for Jackson: works on core streaming API
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
For more information on CVSS3 Scores, click here.
### Suggested Fix
Type: Upgrade version
Release Date: 2021-11-20
Fix Resolution: 2.13.1
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
",True,"jackson-databind-2.13.0.jar: 5 vulnerabilities (highest severity is: 7.5) - Vulnerable Library - jackson-databind-2.13.0.jar
General data-binding functionality for Jackson: works on core streaming API
In FasterXML jackson-databind before 2.13.4, resource exhaustion can occur because of a lack of a check in BeanDeserializer._deserializeFromArray to prevent use of deeply nested arrays. An application is vulnerable only with certain customized choices for deserialization.
For more information on CVSS3 Scores, click here.
### Suggested Fix
Type: Upgrade version
Release Date: 2022-10-02
Fix Resolution: 2.13.4
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
CVE-2022-42003
### Vulnerable Library - jackson-databind-2.13.0.jar
General data-binding functionality for Jackson: works on core streaming API
In FasterXML jackson-databind before 2.14.0-rc1, resource exhaustion can occur because of a lack of a check in primitive value deserializers to avoid deep wrapper array nesting, when the UNWRAP_SINGLE_VALUE_ARRAYS feature is enabled. Additional fix version in 2.13.4.1 and 2.12.17.1
For more information on CVSS3 Scores, click here.
### Suggested Fix
Type: Upgrade version
Release Date: 2022-10-02
Fix Resolution: 2.13.4.1
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
CVE-2020-36518
### Vulnerable Library - jackson-databind-2.13.0.jar
General data-binding functionality for Jackson: works on core streaming API
jackson-databind before 2.13.0 allows a Java StackOverflow exception and denial of service via a large depth of nested objects.
Mend Note: After conducting further research, Mend has determined that all versions of com.fasterxml.jackson.core:jackson-databind up to version 2.13.2 are vulnerable to CVE-2020-36518.
For more information on CVSS3 Scores, click here.
### Suggested Fix
Type: Upgrade version
Release Date: 2022-03-11
Fix Resolution: 2.13.2.1
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
CVE-2021-46877
### Vulnerable Library - jackson-databind-2.13.0.jar
General data-binding functionality for Jackson: works on core streaming API
jackson-databind 2.10.x through 2.12.x before 2.12.6 and 2.13.x before 2.13.1 allows attackers to cause a denial of service (2 GB transient heap usage per read) in uncommon situations involving JsonNode JDK serialization.
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
WS-2021-0616
### Vulnerable Library - jackson-databind-2.13.0.jar
General data-binding functionality for Jackson: works on core streaming API
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
For more information on CVSS3 Scores, click here.
### Suggested Fix
Type: Upgrade version
Release Date: 2021-11-20
Fix Resolution: 2.13.1
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
",0,jackson databind jar vulnerabilities highest severity is vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file pom xml path to vulnerable library home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar found in head commit a href vulnerabilities cve severity cvss dependency type fixed in jackson databind version remediation available high jackson databind jar direct high jackson databind jar direct high jackson databind jar direct high jackson databind jar direct medium jackson databind jar direct details cve vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file pom xml path to vulnerable library home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy x jackson databind jar vulnerable library found in head commit a href found in base branch master vulnerability details in fasterxml jackson databind before resource exhaustion can occur because of a lack of a check in beandeserializer deserializefromarray to prevent use of deeply nested arrays an application is vulnerable only with certain customized choices for deserialization publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version release date fix resolution step up your open source security game with mend cve vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file pom xml path to vulnerable library home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy x jackson databind jar vulnerable library found in head commit a href found in base branch master vulnerability details in fasterxml jackson databind before resource exhaustion can occur because of a lack of a check in primitive value deserializers to avoid deep wrapper array nesting when the unwrap single value arrays feature is enabled additional fix version in and publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version release date fix resolution step up your open source security game with mend cve vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file pom xml path to vulnerable library home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy x jackson databind jar vulnerable library found in head commit a href found in base branch master vulnerability details jackson databind before allows a java stackoverflow exception and denial of service via a large depth of nested objects mend note after conducting further research mend has determined that all 
versions of com fasterxml jackson core jackson databind up to version are vulnerable to cve publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version release date fix resolution step up your open source security game with mend cve vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file pom xml path to vulnerable library home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy x jackson databind jar vulnerable library found in head commit a href found in base branch master vulnerability details jackson databind x through x before and x before allows attackers to cause a denial of service gb transient heap usage per read in uncommon situations involving jsonnode jdk serialization publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend ws vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file pom xml path to vulnerable library home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy x jackson databind jar vulnerable library found in head commit a href found in base branch master vulnerability details fasterxml jackson databind before and there is dos when using jdk serialization to serialize jsonnode publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version release date fix resolution step up your open source security game with mend ,0
8931,27241506061.0,IssuesEvent,2023-02-21 20:54:29,bcgov/api-services-portal,https://api.github.com/repos/bcgov/api-services-portal,closed,Cypress Tests for Authorization Profile Client Role,automation,"1. Set roles in Keycloak auth client
1.1 Authenticates Admin owner
1.2 Add ""Read"" role to the client of the authorization profile
1.3 Add ""Write"" role to the client of the authorization profile
2. Apply client roles to the Authorization Profile
2.1 Authenticates Wendy (Credential-Issuer)
2.2 Select the namespace created for client credential
2.3 Clear the Client Scope
2.4 Set the roles to the authorization profile
3. Developer creates an access request for Client ID/Secret authenticator to verify read role
3.1 Developer logs in
3.2 Creates an application
3.3 Creates an access request
4. Access manager apply ""Read"" role and approves developer access request
4.1 Access Manager logs in
4.2 Access Manager approves developer access request
4.3 Select scopes in Authorization Tab
4.4 approves an access request
5. Update the Kong plugin and verify that only the GET method is allowed for the Read role
5.1 Set allowed method ""GET"" in kong plugin
5.2 Set authorization roles in plugin file
5.3 Set allowed audience in plugin file
5.4 applies authorization plugin to service published to Kong Gateway
5.5 Make ""GET"" call and verify that Kong allows user to access the resources
5.6 Make ""POST"" call and verify that Kong does not allow user to access the resources
6. Developer creates an access request for Client ID/Secret authenticator to verify write role
6.1 Developer logs in
6.2 Creates an application
6.3 Creates an access request
7. Access manager apply ""Write"" role and approves developer access request
7.1 Access Manager logs in
7.2 Access Manager approves developer access request
7.3 Select ""Write"" roles in Authorization Tab
7.4 approves an access request
8. Update the Kong plugin and verify that only the PUT and POST methods are allowed for the Write role
8.1 Set allowed methods ""PUT"" and ""POST"" in kong plugin
8.2 Set authorization roles in plugin file
8.3 Set allowed audience in plugin file
8.4 applies authorization plugin to service published to Kong Gateway
8.5 Make ""GET"" call and verify that Kong does not allow user to access the resources
8.6 Make ""POST"" call and verify that Kong allows user to access the resources
8.7 Make ""PUT"" call and verify that Kong allows user to access the resources
",1.0,"Cypress Tests for Authorization Profile Client Role - 1. Set roles in Keycloak auth client
1.1 Authenticates Admin owner
1.2 Add ""Read"" role to the client of the authorization profile
1.3 Add ""Write"" role to the client of the authorization profile
2. Apply client roles to the Authorization Profile
2.1 Authenticates Wendy (Credential-Issuer)
2.2 Select the namespace created for client credential
2.3 Clear the Client Scope
2.4 Set the roles to the authorization profile
3. Developer creates an access request for Client ID/Secret authenticator to verify read role
3.1 Developer logs in
3.2 Creates an application
3.3 Creates an access request
4. Access manager apply ""Read"" role and approves developer access request
4.1 Access Manager logs in
4.2 Access Manager approves developer access request
4.3 Select scopes in Authorization Tab
4.4 approves an access request
5. Update Kong plugin and verify that only only GET method is allowed for Read role
5.1 Set allowed method ""GET"" in kong plugin
5.2 Set authorization roles in plugin file
5.3 Set allowed audience in plugin file
5.4 applies authorization plugin to service published to Kong Gateway
5.5 Make ""GET"" call and verify that Kong allows user to access the resources
5.6 Make ""POST"" call and verify that Kong does not allow user to access the resources
6. Developer creates an access request for Client ID/Secret authenticator to verify write role
6.1 Developer logs in
6.2 Creates an application
6.3 Creates an access request
7. Access manager apply ""Write"" role and approves developer access request
7.1 Access Manager logs in
7.2 Access Manager approves developer access request
7.3 Select ""Write"" roles in Authorization Tab
7.4 approves an access request
8. Update Kong plugin and verify that only only PUT and POST methods are allowed for Read role
8.1 Set allowed methods ""PUT"" and ""POST"" in kong plugin
8.2 Set authorization roles in plugin file
8.3 Set allowed audience in plugin file
8.4 applies authorization plugin to service published to Kong Gateway
8.5 Make ""GET"" call and verify that Kong does not allow user to access the resources
8.6 Make ""POST"" call and verify that Kong allows user to access the resources
8.7 Make ""PUT"" call and verify that Kong allows user to access the resources
",1,cypress tests for authorization profile client role set roles in keycloak auth client authenticates admin owner add read role to the client of the authorization profile add write role to the client of the authorization profile apply client roles to the authorization profile authenticates wendy credential issuer select the namespace created for client credential clear the client scope set the roles to the authorization profile developer creates an access request for client id secret authenticator to verify read role developer logs in creates an application creates an access request access manager apply read role and approves developer access request access manager logs in access manager approves developer access request select scopes in authorization tab approves an access request update kong plugin and verify that only only get method is allowed for read role set allowed method get in kong plugin set authorization roles in plugin file set allowed audience in plugin file applies authorization plugin to service published to kong gateway make get call and verify that kong allows user to access the resources make post call and verify that kong does not allow user to access the resources developer creates an access request for client id secret authenticator to verify write role developer logs in creates an application creates an access request access manager apply write role and approves developer access request access manager logs in access manager approves developer access request select write roles in authorization tab approves an access request update kong plugin and verify that only only put and post methods are allowed for read role set allowed methods put and post in kong plugin set authorization roles in plugin file set allowed audience in plugin file applies authorization plugin to service published to kong gateway make get call and verify that kong does not allow user to access the resources make post call and verify that kong allows user to access the resources make put call and verify that kong allows user to access the resources ,1
3709,14399673772.0,IssuesEvent,2020-12-03 11:15:43,Tithibots/tithiwa,https://api.github.com/repos/Tithibots/tithiwa,closed,Create remove_group_admins() in Group class Similar to make_group_admins(),Selenium Automation enhancement good first issue python,"Take a look at https://github.com/Tithibots/tithiwa/blob/7cef0e13b6ab6f8050060bb3cbaad0f59f79c19e/tithiwa/group.py#L50
Just need to click on the `Remove` button for the given members",1.0,"Create remove_group_admins() in Group class Similar to make_group_admins() - Take a look at https://github.com/Tithibots/tithiwa/blob/7cef0e13b6ab6f8050060bb3cbaad0f59f79c19e/tithiwa/group.py#L50
Just need to click on the `Remove` button forgiven members",1,create remove group admins in group class similar to make group admins take a look at just need to click on the remove button forgiven members,1
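Since `make_group_admins()` already locates a member and opens their menu, `remove_group_admins()` would follow the same flow but press the demote/remove option instead. A very rough Selenium sketch; the XPath locators and menu text below are hypothetical placeholders, not tithiwa's real selectors, which are not shown in this issue.

```python
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait


def remove_group_admins(driver, member_names):
    # Hypothetical flow: for each member, open their entry in the group info
    # panel and click the option that removes their admin rights.
    wait = WebDriverWait(driver, 20)
    for member in member_names:
        member_row = wait.until(EC.element_to_be_clickable(
            (By.XPATH, f'//span[@title="{member}"]')))          # placeholder locator
        member_row.click()
        dismiss = wait.until(EC.element_to_be_clickable(
            (By.XPATH, '//div[text()="Dismiss as admin"]')))     # placeholder locator
        dismiss.click()
```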
650848,21419034510.0,IssuesEvent,2022-04-22 13:52:33,consta-design-system/uikit,https://api.github.com/repos/consta-design-system/uikit,opened,Table: resize & scroll improvements,feature 🔥 priority,"- [ ] Add a property that controls how free space is distributed:
- to the last column (default)
- to a column with a given number
- spread evenly across all columns
- [ ] Fix the scrolling bug
",1.0,"Table: resize & scroll improvements - - [ ] Add a property that controls how free space is distributed:
- to the last column (default)
- to a column with a given number
- spread evenly across all columns
- [ ] Fix the scrolling bug
",0,table доработки resize scroll добавить свойство отвечающее за распределение свободного пространства последнему столбцу по умолчанию задать номер столбца равномерно раскидать по всем столбцам исправить баг со скроллом img width alt снимок экрана в src img width alt снимок экрана в src ,0
677785,23175410258.0,IssuesEvent,2022-07-31 10:48:36,fredo-ai/Fredo-Public,https://api.github.com/repos/fredo-ai/Fredo-Public,closed,Image uploads to #project,priority-4,"When I upload an image and provide some #hashtag, I want the image to be put in the correct project list in Workflowy.
This might not be possible within our current bot system. Maybe in the future.",1.0,"Image uploads to #project - When I upload an image and provide some #hashtag, I want the image to be put in the correct project list in Workflowy.
This might not be possible within our current bot system. Maybe in the future.",0,image uploads to project when i upload an image and provide some hashtag i want the image to be put in the correct project list in workflowy this might not be possible within our current bot system maybe in the future ,0
1718,10596459909.0,IssuesEvent,2019-10-09 21:18:12,rancher/rancher,https://api.github.com/repos/rancher/rancher,opened,Automation - test cluster and project monitoring ,kind/task setup/automation,add tests for cluster and project monitoring ,1.0,Automation - test cluster and project monitoring - add tests for cluster and project monitoring ,1,automation test cluster and project monitoring add tests for cluster and project monitoring ,1
4566,16869629514.0,IssuesEvent,2021-06-22 01:23:13,mozilla-mobile/fenix,https://api.github.com/repos/mozilla-mobile/fenix,closed,Way to approve contributer pull requests for automation,P1 eng:automation wontfix,"Due to some configuration choices we do not run pull requests from people outside an approved group. It would be good to have some way to mark the PR as approved for testing.
Possibly a label or comment that triggers the automation?
Right now the only way I know of to run the automation would be to fork the users PR and submit a request under an approved member.",1.0,"Way to approve contributer pull requests for automation - Do to some configuration choices we do not run pull requests from people outside an approved group. It would be good to have some way to mark the PR as approved for testing.
Possibly a label or comment that triggers the automation?
Right now the only way I know of to run the automation would be to fork the users PR and submit a request under an approved member.",1,way to approve contributer pull requests for automation do to some configuration choices we do not run pull requests from people outside an approved group it would be good to have some way to mark the pr as approved for testing possibly a label or comment that triggers the automation right now the only way i know of to run the automation would be to fork the users pr and submit a request under an approved member ,1
4042,15242574087.0,IssuesEvent,2021-02-19 10:03:26,home-assistant/frontend,https://api.github.com/repos/home-assistant/frontend,closed,"""delay: 5"" in yaml is converted to 5 hours in UI editor",bug editor: automation,"**Checklist**
- [X] I have updated to the latest available Home Assistant version.
- [X] I have cleared the cache of my browser.
- [X] I have tried a different browser to see if it is related to my browser.
**Describe the issue you are experiencing**
If the action ```- delay: 5``` is written in YAML and the automation is loaded in the UI editor, the editor will indicate that the delay is 5 hours. (```05:00:00:000```).
**Describe the behavior you expected**
I would expect it to be 5 seconds.
**Steps to reproduce the issue**
1. Create a delay in YAML
2. Open in UI automation/script-editor
**What version of Home Assistant Core has the issue?**
core-2021.2.3
**What was the last working version of Home Assistant Core?**
_No response_
**In which browser are you experiencing the issue with?**
Google Chrome 88.0.4324.150 / Android companion app beta-580-5ee48f2-full
**Which operating system are you using to run this browser?**
Windows 10 / Android
**State of relevant entities**
```yaml
# Paste your state here.
```
**Problem-relevant frontend configuration**
```yaml
- delay: 5
```
**Javascript errors shown in your browser console/inspector**
```txt
# Paste your logs here.
```
",1.0,"""delay: 5"" in yaml is converted to 5 hours in UI editor - **Checklist**
- [X] I have updated to the latest available Home Assistant version.
- [X] I have cleared the cache of my browser.
- [X] I have tried a different browser to see if it is related to my browser.
**Describe the issue you are experiencing**
If the action ```- delay: 5``` is written in YAML and the automation is loaded in the UI editor, the editor will indicate that the delay is 5 hours. (```05:00:00:000```).
**Describe the behavior you expected**
I would expect it to be 5 seconds.
**Steps to reproduce the issue**
1. Create a delay in YAML
2. Open in UI automation/script-editor
**What version of Home Assistant Core has the issue?**
core-2021.2.3
**What was the last working version of Home Assistant Core?**
_No response_
**In which browser are you experiencing the issue with?**
Google Chrome 88.0.4324.150 / Android companion app beta-580-5ee48f2-full
**Which operating system are you using to run this browser?**
Windows 10 / Android
**State of relevant entities**
```yaml
# Paste your state here.
```
**Problem-relevant frontend configuration**
```yaml
- delay: 5
```
**Javascript errors shown in your browser console/inspector**
```txt
# Paste your logs here.
```
",1, delay in yaml is converted to hours in ui editor checklist i have updated to the latest available home assistant version i have cleared the cache of my browser i have tried a different browser to see if it is related to my browser describe the issue you are experiencing if the action delay is written in yaml and the automation is loaded in the ui editor the editor will indicate that the delay is hours describe the behavior you expected i would expect it to be seconds steps to reproduce the issue create a delay in yaml open in ui automation script editor what version of home assistant core has the issue core what was the last working version of home assistant core no response in which browser are you experiencing the issue with google chrome android companion app beta full which operating system are you using to run this browser windows android state of relevant entities yaml paste your state here problem relevant frontend configuration yaml delay javascript errors shown in your browser console inspector txt paste your logs here ,1
287765,31856358344.0,IssuesEvent,2023-09-15 07:45:24,Trinadh465/linux-4.1.15_CVE-2023-26607,https://api.github.com/repos/Trinadh465/linux-4.1.15_CVE-2023-26607,opened,CVE-2020-13974 (High) detected in linuxlinux-4.6,Mend: dependency security vulnerability,"## CVE-2020-13974 - High Severity Vulnerability
Vulnerable Library - linuxlinux-4.6
An issue was discovered in the Linux kernel 4.4 through 5.7.1. drivers/tty/vt/keyboard.c has an integer overflow if k_ascii is called several times in a row, aka CID-b86dab054059. NOTE: Members in the community argue that the integer overflow does not lead to a security issue in this case.
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2020-13974 (High) detected in linuxlinux-4.6 - ## CVE-2020-13974 - High Severity Vulnerability
Vulnerable Library - linuxlinux-4.6
An issue was discovered in the Linux kernel 4.4 through 5.7.1. drivers/tty/vt/keyboard.c has an integer overflow if k_ascii is called several times in a row, aka CID-b86dab054059. NOTE: Members in the community argue that the integer overflow does not lead to a security issue in this case.
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve high detected in linuxlinux cve high severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in head commit a href found in base branch main vulnerable source files drivers tty vt keyboard c drivers tty vt keyboard c vulnerability details an issue was discovered in the linux kernel through drivers tty vt keyboard c has an integer overflow if k ascii is called several times in a row aka cid note members in the community argue that the integer overflow does not lead to a security issue in this case publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution linux libc headers linux yocto gitautoinc step up your open source security game with mend ,0
111209,11726822251.0,IssuesEvent,2020-03-10 15:03:38,primefaces/primefaces,https://api.github.com/repos/primefaces/primefaces,closed,Docu: make sure up-to-date versions are delivered to the client,documentation,"Due to this, old versions of the documentation are loaded from ""Cache Storage"".


When we update the documentation, users still have the old versions in their ""Cache Storage"". Only pressing Ctrl + F5 (or clearing the entire browser cache) helps.
When we look at https://docsify.js.org/#/?id=docsify, docsify itself does not use a service worker.
There's a docsify issue: https://github.com/docsifyjs/docsify/issues/190
It says we should remove the service worker because it's not needed anymore.
",1.0,"Docu: make sure up-to-date versions are delivered to the client - Due to this (old versions) of the documentation are loaded from ""Cache Storage"".


When we update the documentation, users still have the old versions in their ""Cache Storage"". Only pressing Ctrl + F5 (or clearing the entire browser cache) helps.
When we look at https://docsify.js.org/#/?id=docsify, docsify itself does not use a service worker.
There's a docsify issue: https://github.com/docsifyjs/docsify/issues/190
It says we should remove the service worker because it's not needed anymore.
",0,docu make sure up to date versions are delivered to the client due to this old versions of the documentation are loaded from cache storage when we update documentation users still have the old versions in their cache storage only pushing ctrl or cleanup all browser cache helps when we look at itself does not use a serviceworker there´s a docsify issue it say´s we should remove the serviceworker because it´s not needed anymore ,0
315512,23583685486.0,IssuesEvent,2022-08-23 09:45:06,ONSdigital/design-system,https://api.github.com/repos/ONSdigital/design-system,opened,Remove links to downloadable resources pattern,Bug Documentation,"We removed the downloadable resources docs so the links to it need to be removed:
- https://ons-design-system.netlify.app/components/document-list/
- (there may be more)
- https://ons-design-system.netlify.app/components/document-list/
- (there may be more)",0,remove links to downloadable resources pattern we removed the downloadable resources docs so the links to it need to be removed there maybe more ,0
6433,23131650798.0,IssuesEvent,2022-07-28 10:55:37,elastic/apm-pipeline-library,https://api.github.com/repos/elastic/apm-pipeline-library,closed,[filebeat step] Usage is not deterministic,bug question Team:Automation impact:low,"When we use the filebeat step wrapping `dir(BASE_DIR)`, it finds the container log files. But if we move the step inside, it does not.
## Example
Not archiving: https://github.com/elastic/e2e-testing/pull/1330, the step was moved out of the dir(BaseDir)
Archiving: https://github.com/elastic/e2e-testing/pull/1487, restored the location",1.0,"[filebeat step] Usage is not deterministic - When we use the filebeat step wrapping `dir(BASE_DIR)`, it finds the container log files. But if we move the step inside, it does not.
## Example
Not archiving: https://github.com/elastic/e2e-testing/pull/1330, the step was moved out of the dir(BaseDir)
Archiving: https://github.com/elastic/e2e-testing/pull/1487, restored the location",1, usage is not deterministic when we use the filebeat step wrapping dir base dir it finds the container log files but if we move the step inside it does not example not archiving the step was moved out of the dir basedir archiving restored the location,1
4383,16375023129.0,IssuesEvent,2021-05-15 23:16:16,IBM/FHIR,https://api.github.com/repos/IBM/FHIR,closed,Javadocs site doesn't update version number.,automation bug,"**Describe the bug**
Javadocs site doesn't update version number.
https://ibm.github.io/FHIR/javadocs/4.7.1/index.html?overview-summary.html
**Expected behavior**
Should show the version number properly
**Additional context**
Should show the version number properly
",1.0,"Javadocs site doesn't update version number. - **Describe the bug**
Javadocs site doesn't update version number.
https://ibm.github.io/FHIR/javadocs/4.7.1/index.html?overview-summary.html
**Expected behavior**
Should show the version number properly
**Additional context**
Should show the version number properly
",1,javadocs site doesn t update version number describe the bug javadocs site doesn t update version number expected behavior should show the version number properly additional context should show the version number properly ,1
8835,27172311406.0,IssuesEvent,2023-02-17 20:39:47,OneDrive/onedrive-api-docs,https://api.github.com/repos/OneDrive/onedrive-api-docs,closed,Proper way to get item metadata by path while avoiding the url length limit,Needs: Attention :wave: automation:Closed,"#### Category
- [x] Question
- [ ] Documentation issue
- [ ] Bug
I use the `https://${SHAREPOINT_SITE_ID}/_api/v2.0/drives/${DRIVE_ID}/root:@path:?path='${URL_ENCODED_PATH}' ` endpoint to get an item by its path.
When a file name contains unicode characters, it's quite easy for the url to exceed the 2048-character length limit.
For example, the character ""鵝"" is usually encoded to ""%E9%B5%9D"", and a filename consisting of 300 ""鵝"" characters encodes to a URL longer than 2700 characters, so when I try to get the item's metadata the API returns a 401 error with an empty response body.
I did find a way to shorten the url by encoding ""鵝"" as ""%u9D5D"", however there's 3 problems with it:
1. I _think_ this is UTF-16?
2. is this encoding supported for all onedrive apis?
3. the character U+1F4A9 ""💩"" is ""%uD83D%uDCA9"", so a filename with 300 ""💩""s would be more than 300 * 6 * 2 = 3600 characters and still exceed the 2048-character limit
[ ]: http://aka.ms/onedrive-api-issues
[x]: http://aka.ms/onedrive-api-issues",1.0,"Proper way to get item metadata by path while avoiding the url length limit - #### Category
- [x] Question
- [ ] Documentation issue
- [ ] Bug
I use the `https://${SHAREPOINT_SITE_ID}/_api/v2.0/drives/${DRIVE_ID}/root:@path:?path='${URL_ENCODED_PATH}' ` endpoint to get an item by its path.
When a file name contains unicode characters, it's quite easy for the url to exceed the 2048-character length limit.
For example, the character ""鵝"" is usually encoded to ""%E9%B5%9D"", and a filename consisting of 300 ""鵝"" characters encodes to a URL longer than 2700 characters, so when I try to get the item's metadata the API returns a 401 error with an empty response body.
I did find a way to shorten the url by encoding ""鵝"" as ""%u9D5D"", however there's 3 problems with it:
1. I _think_ this is UTF-16?
2. is this encoding supported for all onedrive apis?
3. the character U+1F4A9 ""💩"" is ""%uD83D%uDCA9"", so a filename with 300 ""💩""s would be more than 300 * 6 * 2 = 3600 characters and still exceed the 2048-character limit
[ ]: http://aka.ms/onedrive-api-issues
[x]: http://aka.ms/onedrive-api-issues",1,proper way to get item metadata by path while avoiding the url length limit category question documentation issue bug i use the endpoint to get an item by its path when a file name contains unicode characters it s quite easy for the url to exceed the character length limit for example the character 鵝 is usually encoded to and a filename consisting of 鵝 encodes to an url longer than characters so when i try to get the item s metadata the api returns a error with an empty response body i did find a way to shorten the url by encoding 鵝 as however there s problems with it i think this is utf is this encoding supported for all onedrive apis the character u 💩 is so a filename with 💩 s would be more than characters and still exceed the character limit ,1
2178,11518171552.0,IssuesEvent,2020-02-14 09:58:08,elastic/opbeans-frontend,https://api.github.com/repos/elastic/opbeans-frontend,opened,Allow to disable random errors generator,automation enhancement,"There is an instrumented error that is generated randomly; being able to disable it would help to use the Opbeans frontend in a predictable way.
https://github.com/elastic/opbeans-frontend/blob/472f914f5529d64ccf4aad0fc4a76ec27fa0a135/src/components/ProductDetail/index.js#L9",1.0,"Allow to disable random errors generator - There is an instrumented error that is generated randomly; being able to disable it would help to use the Opbeans frontend in a predictable way.
https://github.com/elastic/opbeans-frontend/blob/472f914f5529d64ccf4aad0fc4a76ec27fa0a135/src/components/ProductDetail/index.js#L9",1,allow to disable random errors generator there is an instrumented error that it is generated randomly to be able to disable it helps to use the opbeans front end in a predictable way ,1
2142,11459600144.0,IssuesEvent,2020-02-07 07:44:10,apache/druid,https://api.github.com/repos/apache/druid,closed,"Prohibit HashMap(capacity), HashMap(capacity, loadFactor), HashSet, LinkedHashMap constructors",Area - Automation/Static Analysis Contributions Welcome Performance Starter,"They are pretty much always misused. See [this SO answer](https://stackoverflow.com/a/30220944/648955) for the explanation. They should be prohibited using forbidden-apis with suggested alternatives: Guava's `Maps.new(Linked)HashMapWithExpectedSize()`, `Sets.newHashSetWithExpectedSize()`.",1.0,"Prohibit HashMap(capacity), HashMap(capacity, loadFactor), HashSet, LinkedHashMap constructors - They are pretty much always misused. See [this SO answer](https://stackoverflow.com/a/30220944/648955) for the explanation. They should be prohibited using forbidden-apis with suggested alternatives: Guava's `Maps.new(Linked)HashMapWithExpectedSize()`, `Sets.newHashSetWithExpectedSize()`.",1,prohibit hashmap capacity hashmap capacity loadfactor hashset linkedhashmap constructors they are pretty much always misused see for the explanation they should be prohibited using forbidden apis with suggested alternatives guava s maps new linked hashmapwithexpectedsize sets newhashsetwithexpectedsize ,1
9095,27540698294.0,IssuesEvent,2023-03-07 08:20:37,elastic/apm-pipeline-library,https://api.github.com/repos/elastic/apm-pipeline-library,closed,GitHub PR comment with the name of the stage where the step failed,automation ci,"For instance, when running the same step in several parallel stages, the information might not be relevant but duplicated; even though they are totally different steps, from the user's perspective it might seem a bit weird.

I'd like to add some improvements to provide the stage where the step failed
What do you think?",1.0,"GitHub PR comment with the name of the stage where the step failed - For instance, when running the same step in several parallel stages, the information might not be relevant but duplicated; even though they are totally different steps, from the user's perspective it might seem a bit weird.

I'd like to add some improvements to provide the stage where the step failed
What do you think?",1,github pr comment with the name of the stage where the step failed for instance when running the same step in several parallel stages then the information might not be relevant but duplicated even though they are totally different steps but from the user experience it might seem a bit of weird i d like to add some improvements to provide the stage where the step failed what do you think ,1
9545,29522343207.0,IssuesEvent,2023-06-05 03:54:59,pingcap/tidb,https://api.github.com/repos/pingcap/tidb,closed,The Plan of TPCH Q3 changes without any data update leading to 13s performance regression,type/enhancement type/performance sig/planner found/automation affects-6.3,"## Bug Report
Please answer these questions before submitting your issue. Thanks!
### 1. Minimal reproduce step (Required)
1. deploy a tidb cluster: 1 tidb (16c) + 3 TiKV(16c) + 1 PD
2. restore tpch 50g data
3. run tpch for 30 mins
### 2. What did you expect to see? (Required)
The plans of all queries should not change, and the plan for Q3 should stay fixed at d14988835227e68de9bb1194760cee8e.
### 3. What did you see instead (Required)
The plan for Q3 would change in some of the daily runs.

TPCH Q3
```
q3 = `
/*PLACEHOLDER*/ select
l_orderkey,
sum(l_extendedprice * (1 - l_discount)) as revenue,
o_orderdate,
o_shippriority
from
customer,
orders,
lineitem
where
c_mktsegment = 'AUTOMOBILE'
and c_custkey = o_custkey
and l_orderkey = o_orderkey
and o_orderdate < '1995-03-13'
and l_shipdate > '1995-03-13'
group by
l_orderkey,
o_orderdate,
o_shippriority
order by
revenue desc,
o_orderdate
limit 10;
`
```
```
Olap_Detail_Log_ID: 2792562 Plan_Digest: d14988835227e68de9bb1194760cee8e Elapsed_Time (s): 26.5
+--------------------------------------+-------------+----------+-----------+---------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------+---------+
| ID | ESTROWS | ACTROWS | TASK | ACCESS OBJECT | EXECUTION INFO | OPERATOR INFO | MEMORY | DISK |
+--------------------------------------+-------------+----------+-----------+---------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------+---------+
| Projection_14 | 10.00 | 10 | root | | time:26.5s, loops:2, Concurrency:OFF | test.lineitem.l_orderkey, Column#35, test.orders.o_orderdate, test.orders.o_shippriority | 2.52 KB | N/A |
| └─TopN_17 | 10.00 | 10 | root | | time:26.5s, loops:2 | Column#35:desc, test.orders.o_orderdate, offset:0, count:10 | 76.8 KB | N/A |
| └─HashAgg_22 | 39991142.90 | 565763 | root | | time:26.5s, loops:555, partial_worker:{wall_time:26.060572975s, concurrency:5, task_num:1461, tot_wait:2m8.697732643s, tot_exec:1.436736417s, tot_time:2m10.299361175s, max:26.060536947s, p95:26.060536947s}, final_worker:{wall_time:26.522530518s, concurrency:5, task_num:25, tot_wait:2m10.294821085s, tot_exec:2.216177038s, tot_time:2m12.511018685s, max:26.522479673s, p95:26.522479673s} | group by:Column#48, Column#49, Column#50, funcs:sum(Column#44)->Column#35, funcs:firstrow(Column#45)->test.orders.o_orderdate, funcs:firstrow(Column#46)->test.orders.o_shippriority, funcs:firstrow(Column#47)->test.lineitem.l_orderkey | 378.4 MB | N/A |
| └─Projection_82 | 92857210.61 | 1495049 | root | | time:26s, loops:1462, Concurrency:5 | mul(test.lineitem.l_extendedprice, minus(1, test.lineitem.l_discount))->Column#44, test.orders.o_orderdate, test.orders.o_shippriority, test.lineitem.l_orderkey, test.lineitem.l_orderkey, test.orders.o_orderdate, test.orders.o_shippriority | 1.09 MB | N/A |
| └─IndexHashJoin_30 | 92857210.61 | 1495049 | root | | time:26s, loops:1462, inner:{total:2m3.7s, concurrency:5, task:292, construct:8.58s, fetch:1m51.9s, build:1.94s, join:3.19s} | inner join, inner:IndexLookUp_27, outer key:test.orders.o_orderkey, inner key:test.lineitem.l_orderkey, equal cond:eq(test.orders.o_orderkey, test.lineitem.l_orderkey) | 35.5 MB | N/A |
| ├─HashJoin_70(Build) | 22875928.63 | 7274323 | root | | time:6.91s, loops:7108, build_hash_table:{total:891.1ms, fetch:158.5ms, build:732.6ms}, probe:{concurrency:5, total:2m7.2s, max:25.4s, probe:1m53.5s, fetch:13.7s} | inner join, equal:[eq(test.customer.c_custkey, test.orders.o_custkey)] | 141.3 MB | 0 Bytes |
| │ ├─TableReader_76(Build) | 1502320.19 | 1501166 | root | | time:214.1ms, loops:1463, cop_task: {num: 150, max: 199.7ms, min: 1.34ms, avg: 51.1ms, p95: 173.9ms, max_proc_keys: 185477, p95_proc_keys: 183486, tot_proc: 7.46s, tot_wait: 24ms, rpc_num: 150, rpc_time: 7.66s, copr_cache_hit_ratio: 0.00, distsql_concurrency: 15} | data:Selection_75 | 9.18 MB | N/A |
| │ │ └─Selection_75 | 1502320.19 | 1501166 | cop[tikv] | | tikv_task:{proc max:193ms, min:0s, avg: 48.8ms, p80:98ms, p95:168ms, iters:7937, tasks:150}, scan_detail: {total_process_keys: 7500000, total_process_keys_size: 1526085547, total_keys: 7500150, get_snapshot_time: 14.3ms, rocksdb: {key_skipped_count: 7500000, block: {cache_hit_count: 25190}}} | eq(test.customer.c_mktsegment, ""AUTOMOBILE"") | N/A | N/A |
| │ │ └─TableFullScan_74 | 7500000.00 | 7500000 | cop[tikv] | table:customer | tikv_task:{proc max:169ms, min:0s, avg: 43.1ms, p80:87ms, p95:149ms, iters:7937, tasks:150} | keep order:false | N/A | N/A |
| │ └─TableReader_73(Probe) | 36347384.33 | 36374625 | root | | time:2.31s, loops:35402, cop_task: {num: 1615, max: 272.3ms, min: 1.22ms, avg: 63.1ms, p95: 173.8ms, max_proc_keys: 104416, p95_proc_keys: 104416, tot_proc: 1m37.8s, tot_wait: 597ms, rpc_num: 1615, rpc_time: 1m41.9s, copr_cache_hit_ratio: 0.00, distsql_concurrency: 15} | data:Selection_72 | 24.6 MB | N/A |
| │ └─Selection_72 | 36347384.33 | 36374625 | cop[tikv] | | tikv_task:{proc max:234ms, min:0s, avg: 57.5ms, p80:117ms, p95:160ms, iters:79749, tasks:1615}, scan_detail: {total_process_keys: 75000000, total_process_keys_size: 11391895327, total_keys: 75001615, get_snapshot_time: 84.9ms, rocksdb: {key_skipped_count: 75000000, block: {cache_hit_count: 172276, read_count: 20178, read_byte: 337.6 MB, read_time: 146.8ms}}} | lt(test.orders.o_orderdate, 1995-03-13 00:00:00.000000) | N/A | N/A |
| │ └─TableFullScan_71 | 75000000.00 | 75000000 | cop[tikv] | table:orders | tikv_task:{proc max:223ms, min:0s, avg: 54ms, p80:109ms, p95:150ms, iters:79749, tasks:1615} | keep order:false | N/A | N/A |
| └─IndexLookUp_27(Probe) | 4.06 | 1495049 | root | | time:1m45.4s, loops:1958, index_task: {total_time: 1m24.1s, fetch_handle: 1m24.1s, build: 4.73ms, wait: 13.1ms}, table_task: {total_time: 1m51.7s, num: 2583, concurrency: 5} | | 155.7 KB | N/A |
| ├─IndexRangeScan_24(Build) | 7.50 | 29096047 | cop[tikv] | table:lineitem, index:PRIMARY(L_ORDERKEY, L_LINENUMBER) | time:1m21.7s, loops:30459, cop_task: {num: 12290, max: 156.9ms, min: 482.9µs, avg: 21ms, p95: 58.5ms, max_proc_keys: 26443, p95_proc_keys: 9184, tot_proc: 2m58s, tot_wait: 13.5s, rpc_num: 12290, rpc_time: 4m18.1s, copr_cache_hit_ratio: 0.00, distsql_concurrency: 15}, tikv_task:{proc max:135ms, min:0s, avg: 13.6ms, p80:22ms, p95:48ms, iters:74689, tasks:12290}, scan_detail: {total_process_keys: 29096047, total_process_keys_size: 1542090491, total_keys: 36380592, get_snapshot_time: 931ms, rocksdb: {key_skipped_count: 29096047, block: {cache_hit_count: 14364485, read_count: 226140, read_byte: 1023.0 MB, read_time: 1.03s}}} | range: decided by [eq(test.lineitem.l_orderkey, test.orders.o_orderkey)], keep order:false | N/A | N/A |
| └─Selection_26(Probe) | 4.06 | 1495049 | cop[tikv] | | time:1m41.8s, loops:5703, cop_task: {num: 10316, max: 193.4ms, min: 379.4µs, avg: 20.8ms, p95: 63.1ms, max_proc_keys: 20984, p95_proc_keys: 9360, tot_proc: 2m59.9s, tot_wait: 13.6s, rpc_num: 10316, rpc_time: 3m33.9s, copr_cache_hit_ratio: 0.00, distsql_concurrency: 15}, tikv_task:{proc max:186ms, min:0s, avg: 16.5ms, p80:28ms, p95:56ms, iters:73336, tasks:10316}, scan_detail: {total_process_keys: 29096047, total_process_keys_size: 5779934528, total_keys: 34821877, get_snapshot_time: 1.43s, rocksdb: {key_skipped_count: 28246157, block: {cache_hit_count: 12914527, read_count: 59918, read_byte: 1.29 GB, read_time: 697.6ms}}} | gt(test.lineitem.l_shipdate, 1995-03-13 00:00:00.000000) | N/A | N/A |
| └─TableRowIDScan_25 | 7.50 | 29096047 | cop[tikv] | table:lineitem | tikv_task:{proc max:185ms, min:0s, avg: 16.2ms, p80:28ms, p95:55ms, iters:73336, tasks:10316} | keep order:false | N/A | N/A |
+--------------------------------------+-------------+----------+-----------+---------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------+---------+
Olap_Detail_Log_ID: 2792540 Plan_Digest: f8e52347ef089dc357e3ff1704d5415e Elapsed_Time (s): 39.0
+-----------------------------------+--------------+-----------+-----------+----------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------+---------+
| ID | ESTROWS | ACTROWS | TASK | ACCESS OBJECT | EXECUTION INFO | OPERATOR INFO | MEMORY | DISK |
+-----------------------------------+--------------+-----------+-----------+----------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------+---------+
| Projection_14 | 10.00 | 10 | root | | time:39s, loops:2, Concurrency:OFF | test.lineitem.l_orderkey, Column#35, test.orders.o_orderdate, test.orders.o_shippriority | 2.52 KB | N/A |
| └─TopN_17 | 10.00 | 10 | root | | time:39s, loops:2 | Column#35:desc, test.orders.o_orderdate, offset:0, count:10 | 76.8 KB | N/A |
| └─HashAgg_22 | 39759090.21 | 565763 | root | | time:39s, loops:555, partial_worker:{wall_time:38.571025254s, concurrency:5, task_num:1461, tot_wait:3m11.057657942s, tot_exec:1.598260816s, tot_time:3m12.821384773s, max:38.570971871s, p95:38.570971871s}, final_worker:{wall_time:39.010329063s, concurrency:5, task_num:25, tot_wait:3m12.77921226s, tot_exec:2.193400817s, tot_time:3m14.97263345s, max:39.010246209s, p95:39.010246209s} | group by:Column#48, Column#49, Column#50, funcs:sum(Column#44)->Column#35, funcs:firstrow(Column#45)->test.orders.o_orderdate, funcs:firstrow(Column#46)->test.orders.o_shippriority, funcs:firstrow(Column#47)->test.lineitem.l_orderkey | 378.4 MB | N/A |
| └─Projection_82 | 92857210.61 | 1495049 | root | | time:38.5s, loops:1462, Concurrency:5 | mul(test.lineitem.l_extendedprice, minus(1, test.lineitem.l_discount))->Column#44, test.orders.o_orderdate, test.orders.o_shippriority, test.lineitem.l_orderkey, test.lineitem.l_orderkey, test.orders.o_orderdate, test.orders.o_shippriority | 1.09 MB | N/A |
| └─HashJoin_39 | 92857210.61 | 1495049 | root | | time:38.5s, loops:1462, build_hash_table:{total:9.95s, fetch:5.45s, build:4.49s}, probe:{concurrency:5, total:3m12.6s, max:38.5s, probe:54.8s, fetch:2m17.8s} | inner join, equal:[eq(test.orders.o_orderkey, test.lineitem.l_orderkey)] | 567.5 MB | 0 Bytes |
| ├─HashJoin_70(Build) | 22875928.63 | 7274323 | root | | time:7.69s, loops:7107, build_hash_table:{total:899.1ms, fetch:108ms, build:791.1ms}, probe:{concurrency:5, total:49.7s, max:9.94s, probe:27.5s, fetch:22.3s} | inner join, equal:[eq(test.customer.c_custkey, test.orders.o_custkey)] | 141.3 MB | 0 Bytes |
| │ ├─TableReader_76(Build) | 1502320.19 | 1501166 | root | | time:193ms, loops:1463, cop_task: {num: 150, max: 189ms, min: 805.7µs, avg: 51.9ms, p95: 173.7ms, max_proc_keys: 185477, p95_proc_keys: 183486, tot_proc: 7.57s, tot_wait: 26ms, rpc_num: 150, rpc_time: 7.78s, copr_cache_hit_ratio: 0.00, distsql_concurrency: 15} | data:Selection_75 | 10.5 MB | N/A |
| │ │ └─Selection_75 | 1502320.19 | 1501166 | cop[tikv] | | tikv_task:{proc max:181ms, min:1ms, avg: 49.4ms, p80:117ms, p95:167ms, iters:7937, tasks:150}, scan_detail: {total_process_keys: 7500000, total_process_keys_size: 1526085547, total_keys: 7500150, get_snapshot_time: 15ms, rocksdb: {key_skipped_count: 7500000, block: {cache_hit_count: 25190}}} | eq(test.customer.c_mktsegment, ""AUTOMOBILE"") | N/A | N/A |
| │ │ └─TableFullScan_74 | 7500000.00 | 7500000 | cop[tikv] | table:customer | tikv_task:{proc max:159ms, min:0s, avg: 43.4ms, p80:104ms, p95:148ms, iters:7937, tasks:150} | keep order:false | N/A | N/A |
| │ └─TableReader_73(Probe) | 36347384.33 | 36374625 | root | | time:4s, loops:35403, cop_task: {num: 1615, max: 284.6ms, min: 1.41ms, avg: 69ms, p95: 165.6ms, max_proc_keys: 104416, p95_proc_keys: 104416, tot_proc: 1m47.7s, tot_wait: 168ms, rpc_num: 1615, rpc_time: 1m51.5s, copr_cache_hit_ratio: 0.00, distsql_concurrency: 15} | data:Selection_72 | 13.9 MB | N/A |
| │ └─Selection_72 | 36347384.33 | 36374625 | cop[tikv] | | tikv_task:{proc max:216ms, min:0s, avg: 64.4ms, p80:132ms, p95:157ms, iters:79749, tasks:1615}, scan_detail: {total_process_keys: 75000000, total_process_keys_size: 11391895327, total_keys: 75001615, get_snapshot_time: 104.3ms, rocksdb: {key_skipped_count: 75000000, block: {cache_hit_count: 3121, read_count: 189333, read_byte: 3.09 GB, read_time: 1.28s}}} | lt(test.orders.o_orderdate, 1995-03-13 00:00:00.000000) | N/A | N/A |
| │ └─TableFullScan_71 | 75000000.00 | 75000000 | cop[tikv] | table:orders | tikv_task:{proc max:203ms, min:0s, avg: 61.5ms, p80:126ms, p95:150ms, iters:79749, tasks:1615} | keep order:false | N/A | N/A |
| └─TableReader_79(Probe) | 161388779.98 | 161995407 | root | | time:19.9s, loops:157662, cop_task: {num: 7962, max: 359.2ms, min: 855.3µs, avg: 50.4ms, p95: 136.1ms, max_proc_keys: 95200, p95_proc_keys: 94176, tot_proc: 5m57.4s, tot_wait: 833ms, rpc_num: 7962, rpc_time: 6m40.9s, copr_cache_hit_ratio: 0.00, distsql_concurrency: 15} | data:Selection_78 | 29.2 MB | N/A |
| └─Selection_78 | 161388779.98 | 161995407 | cop[tikv] | | tikv_task:{proc max:173ms, min:0s, avg: 40.4ms, p80:87ms, p95:112ms, iters:324826, tasks:7962}, scan_detail: {total_process_keys: 300005811, total_process_keys_size: 59595430182, total_keys: 300013773, get_snapshot_time: 482.3ms, rocksdb: {key_skipped_count: 300005811, block: {cache_hit_count: 803520, read_count: 185706, read_byte: 2.93 GB, read_time: 1.27s}}} | gt(test.lineitem.l_shipdate, 1995-03-13 00:00:00.000000) | N/A | N/A |
| └─TableFullScan_77 | 300005811.00 | 300005811 | cop[tikv] | table:lineitem | tikv_task:{proc max:163ms, min:0s, avg: 38.2ms, p80:83ms, p95:106ms, iters:324826, tasks:7962} | keep order:false | N/A | N/A |
+-----------------------------------+--------------+-----------+-----------+----------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------+---------+
```
### 4. What is your TiDB version? (Required)
nightly
",1.0,"The Plan of TPCH Q3 changes without any data update leading to 13s performance regression - ## Bug Report
Please answer these questions before submitting your issue. Thanks!
### 1. Minimal reproduce step (Required)
1. deploy a tidb cluster: 1 tidb (16c) + 3 TiKV(16c) + 1 PD
2. restore tpch 50g data
3. run tpch for 30 mins
### 2. What did you expect to see? (Required)
The plans of all queries should not change, and the plan for Q3 should stay fixed at d14988835227e68de9bb1194760cee8e.
### 3. What did you see instead (Required)
The plan for Q3 would change in some of the daily runs.

TPCH Q3
```
q3 = `
/*PLACEHOLDER*/ select
l_orderkey,
sum(l_extendedprice * (1 - l_discount)) as revenue,
o_orderdate,
o_shippriority
from
customer,
orders,
lineitem
where
c_mktsegment = 'AUTOMOBILE'
and c_custkey = o_custkey
and l_orderkey = o_orderkey
and o_orderdate < '1995-03-13'
and l_shipdate > '1995-03-13'
group by
l_orderkey,
o_orderdate,
o_shippriority
order by
revenue desc,
o_orderdate
limit 10;
`
```
```
Olap_Detail_Log_ID: 2792562 Plan_Digest: d14988835227e68de9bb1194760cee8e Elapsed_Time (s): 26.5
+--------------------------------------+-------------+----------+-----------+---------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------+---------+
| ID | ESTROWS | ACTROWS | TASK | ACCESS OBJECT | EXECUTION INFO | OPERATOR INFO | MEMORY | DISK |
+--------------------------------------+-------------+----------+-----------+---------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------+---------+
| Projection_14 | 10.00 | 10 | root | | time:26.5s, loops:2, Concurrency:OFF | test.lineitem.l_orderkey, Column#35, test.orders.o_orderdate, test.orders.o_shippriority | 2.52 KB | N/A |
| └─TopN_17 | 10.00 | 10 | root | | time:26.5s, loops:2 | Column#35:desc, test.orders.o_orderdate, offset:0, count:10 | 76.8 KB | N/A |
| └─HashAgg_22 | 39991142.90 | 565763 | root | | time:26.5s, loops:555, partial_worker:{wall_time:26.060572975s, concurrency:5, task_num:1461, tot_wait:2m8.697732643s, tot_exec:1.436736417s, tot_time:2m10.299361175s, max:26.060536947s, p95:26.060536947s}, final_worker:{wall_time:26.522530518s, concurrency:5, task_num:25, tot_wait:2m10.294821085s, tot_exec:2.216177038s, tot_time:2m12.511018685s, max:26.522479673s, p95:26.522479673s} | group by:Column#48, Column#49, Column#50, funcs:sum(Column#44)->Column#35, funcs:firstrow(Column#45)->test.orders.o_orderdate, funcs:firstrow(Column#46)->test.orders.o_shippriority, funcs:firstrow(Column#47)->test.lineitem.l_orderkey | 378.4 MB | N/A |
| └─Projection_82 | 92857210.61 | 1495049 | root | | time:26s, loops:1462, Concurrency:5 | mul(test.lineitem.l_extendedprice, minus(1, test.lineitem.l_discount))->Column#44, test.orders.o_orderdate, test.orders.o_shippriority, test.lineitem.l_orderkey, test.lineitem.l_orderkey, test.orders.o_orderdate, test.orders.o_shippriority | 1.09 MB | N/A |
| └─IndexHashJoin_30 | 92857210.61 | 1495049 | root | | time:26s, loops:1462, inner:{total:2m3.7s, concurrency:5, task:292, construct:8.58s, fetch:1m51.9s, build:1.94s, join:3.19s} | inner join, inner:IndexLookUp_27, outer key:test.orders.o_orderkey, inner key:test.lineitem.l_orderkey, equal cond:eq(test.orders.o_orderkey, test.lineitem.l_orderkey) | 35.5 MB | N/A |
| ├─HashJoin_70(Build) | 22875928.63 | 7274323 | root | | time:6.91s, loops:7108, build_hash_table:{total:891.1ms, fetch:158.5ms, build:732.6ms}, probe:{concurrency:5, total:2m7.2s, max:25.4s, probe:1m53.5s, fetch:13.7s} | inner join, equal:[eq(test.customer.c_custkey, test.orders.o_custkey)] | 141.3 MB | 0 Bytes |
| │ ├─TableReader_76(Build) | 1502320.19 | 1501166 | root | | time:214.1ms, loops:1463, cop_task: {num: 150, max: 199.7ms, min: 1.34ms, avg: 51.1ms, p95: 173.9ms, max_proc_keys: 185477, p95_proc_keys: 183486, tot_proc: 7.46s, tot_wait: 24ms, rpc_num: 150, rpc_time: 7.66s, copr_cache_hit_ratio: 0.00, distsql_concurrency: 15} | data:Selection_75 | 9.18 MB | N/A |
| │ │ └─Selection_75 | 1502320.19 | 1501166 | cop[tikv] | | tikv_task:{proc max:193ms, min:0s, avg: 48.8ms, p80:98ms, p95:168ms, iters:7937, tasks:150}, scan_detail: {total_process_keys: 7500000, total_process_keys_size: 1526085547, total_keys: 7500150, get_snapshot_time: 14.3ms, rocksdb: {key_skipped_count: 7500000, block: {cache_hit_count: 25190}}} | eq(test.customer.c_mktsegment, ""AUTOMOBILE"") | N/A | N/A |
| │ │ └─TableFullScan_74 | 7500000.00 | 7500000 | cop[tikv] | table:customer | tikv_task:{proc max:169ms, min:0s, avg: 43.1ms, p80:87ms, p95:149ms, iters:7937, tasks:150} | keep order:false | N/A | N/A |
| │ └─TableReader_73(Probe) | 36347384.33 | 36374625 | root | | time:2.31s, loops:35402, cop_task: {num: 1615, max: 272.3ms, min: 1.22ms, avg: 63.1ms, p95: 173.8ms, max_proc_keys: 104416, p95_proc_keys: 104416, tot_proc: 1m37.8s, tot_wait: 597ms, rpc_num: 1615, rpc_time: 1m41.9s, copr_cache_hit_ratio: 0.00, distsql_concurrency: 15} | data:Selection_72 | 24.6 MB | N/A |
| │ └─Selection_72 | 36347384.33 | 36374625 | cop[tikv] | | tikv_task:{proc max:234ms, min:0s, avg: 57.5ms, p80:117ms, p95:160ms, iters:79749, tasks:1615}, scan_detail: {total_process_keys: 75000000, total_process_keys_size: 11391895327, total_keys: 75001615, get_snapshot_time: 84.9ms, rocksdb: {key_skipped_count: 75000000, block: {cache_hit_count: 172276, read_count: 20178, read_byte: 337.6 MB, read_time: 146.8ms}}} | lt(test.orders.o_orderdate, 1995-03-13 00:00:00.000000) | N/A | N/A |
| │ └─TableFullScan_71 | 75000000.00 | 75000000 | cop[tikv] | table:orders | tikv_task:{proc max:223ms, min:0s, avg: 54ms, p80:109ms, p95:150ms, iters:79749, tasks:1615} | keep order:false | N/A | N/A |
| └─IndexLookUp_27(Probe) | 4.06 | 1495049 | root | | time:1m45.4s, loops:1958, index_task: {total_time: 1m24.1s, fetch_handle: 1m24.1s, build: 4.73ms, wait: 13.1ms}, table_task: {total_time: 1m51.7s, num: 2583, concurrency: 5} | | 155.7 KB | N/A |
| ├─IndexRangeScan_24(Build) | 7.50 | 29096047 | cop[tikv] | table:lineitem, index:PRIMARY(L_ORDERKEY, L_LINENUMBER) | time:1m21.7s, loops:30459, cop_task: {num: 12290, max: 156.9ms, min: 482.9µs, avg: 21ms, p95: 58.5ms, max_proc_keys: 26443, p95_proc_keys: 9184, tot_proc: 2m58s, tot_wait: 13.5s, rpc_num: 12290, rpc_time: 4m18.1s, copr_cache_hit_ratio: 0.00, distsql_concurrency: 15}, tikv_task:{proc max:135ms, min:0s, avg: 13.6ms, p80:22ms, p95:48ms, iters:74689, tasks:12290}, scan_detail: {total_process_keys: 29096047, total_process_keys_size: 1542090491, total_keys: 36380592, get_snapshot_time: 931ms, rocksdb: {key_skipped_count: 29096047, block: {cache_hit_count: 14364485, read_count: 226140, read_byte: 1023.0 MB, read_time: 1.03s}}} | range: decided by [eq(test.lineitem.l_orderkey, test.orders.o_orderkey)], keep order:false | N/A | N/A |
| └─Selection_26(Probe) | 4.06 | 1495049 | cop[tikv] | | time:1m41.8s, loops:5703, cop_task: {num: 10316, max: 193.4ms, min: 379.4µs, avg: 20.8ms, p95: 63.1ms, max_proc_keys: 20984, p95_proc_keys: 9360, tot_proc: 2m59.9s, tot_wait: 13.6s, rpc_num: 10316, rpc_time: 3m33.9s, copr_cache_hit_ratio: 0.00, distsql_concurrency: 15}, tikv_task:{proc max:186ms, min:0s, avg: 16.5ms, p80:28ms, p95:56ms, iters:73336, tasks:10316}, scan_detail: {total_process_keys: 29096047, total_process_keys_size: 5779934528, total_keys: 34821877, get_snapshot_time: 1.43s, rocksdb: {key_skipped_count: 28246157, block: {cache_hit_count: 12914527, read_count: 59918, read_byte: 1.29 GB, read_time: 697.6ms}}} | gt(test.lineitem.l_shipdate, 1995-03-13 00:00:00.000000) | N/A | N/A |
| └─TableRowIDScan_25 | 7.50 | 29096047 | cop[tikv] | table:lineitem | tikv_task:{proc max:185ms, min:0s, avg: 16.2ms, p80:28ms, p95:55ms, iters:73336, tasks:10316} | keep order:false | N/A | N/A |
+--------------------------------------+-------------+----------+-----------+---------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------+---------+
Olap_Detail_Log_ID: 2792540 Plan_Digest: f8e52347ef089dc357e3ff1704d5415e Elapsed_Time (s): 39.0
+-----------------------------------+--------------+-----------+-----------+----------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------+---------+
| ID | ESTROWS | ACTROWS | TASK | ACCESS OBJECT | EXECUTION INFO | OPERATOR INFO | MEMORY | DISK |
+-----------------------------------+--------------+-----------+-----------+----------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------+---------+
| Projection_14 | 10.00 | 10 | root | | time:39s, loops:2, Concurrency:OFF | test.lineitem.l_orderkey, Column#35, test.orders.o_orderdate, test.orders.o_shippriority | 2.52 KB | N/A |
| └─TopN_17 | 10.00 | 10 | root | | time:39s, loops:2 | Column#35:desc, test.orders.o_orderdate, offset:0, count:10 | 76.8 KB | N/A |
| └─HashAgg_22 | 39759090.21 | 565763 | root | | time:39s, loops:555, partial_worker:{wall_time:38.571025254s, concurrency:5, task_num:1461, tot_wait:3m11.057657942s, tot_exec:1.598260816s, tot_time:3m12.821384773s, max:38.570971871s, p95:38.570971871s}, final_worker:{wall_time:39.010329063s, concurrency:5, task_num:25, tot_wait:3m12.77921226s, tot_exec:2.193400817s, tot_time:3m14.97263345s, max:39.010246209s, p95:39.010246209s} | group by:Column#48, Column#49, Column#50, funcs:sum(Column#44)->Column#35, funcs:firstrow(Column#45)->test.orders.o_orderdate, funcs:firstrow(Column#46)->test.orders.o_shippriority, funcs:firstrow(Column#47)->test.lineitem.l_orderkey | 378.4 MB | N/A |
| └─Projection_82 | 92857210.61 | 1495049 | root | | time:38.5s, loops:1462, Concurrency:5 | mul(test.lineitem.l_extendedprice, minus(1, test.lineitem.l_discount))->Column#44, test.orders.o_orderdate, test.orders.o_shippriority, test.lineitem.l_orderkey, test.lineitem.l_orderkey, test.orders.o_orderdate, test.orders.o_shippriority | 1.09 MB | N/A |
| └─HashJoin_39 | 92857210.61 | 1495049 | root | | time:38.5s, loops:1462, build_hash_table:{total:9.95s, fetch:5.45s, build:4.49s}, probe:{concurrency:5, total:3m12.6s, max:38.5s, probe:54.8s, fetch:2m17.8s} | inner join, equal:[eq(test.orders.o_orderkey, test.lineitem.l_orderkey)] | 567.5 MB | 0 Bytes |
| ├─HashJoin_70(Build) | 22875928.63 | 7274323 | root | | time:7.69s, loops:7107, build_hash_table:{total:899.1ms, fetch:108ms, build:791.1ms}, probe:{concurrency:5, total:49.7s, max:9.94s, probe:27.5s, fetch:22.3s} | inner join, equal:[eq(test.customer.c_custkey, test.orders.o_custkey)] | 141.3 MB | 0 Bytes |
| │ ├─TableReader_76(Build) | 1502320.19 | 1501166 | root | | time:193ms, loops:1463, cop_task: {num: 150, max: 189ms, min: 805.7µs, avg: 51.9ms, p95: 173.7ms, max_proc_keys: 185477, p95_proc_keys: 183486, tot_proc: 7.57s, tot_wait: 26ms, rpc_num: 150, rpc_time: 7.78s, copr_cache_hit_ratio: 0.00, distsql_concurrency: 15} | data:Selection_75 | 10.5 MB | N/A |
| │ │ └─Selection_75 | 1502320.19 | 1501166 | cop[tikv] | | tikv_task:{proc max:181ms, min:1ms, avg: 49.4ms, p80:117ms, p95:167ms, iters:7937, tasks:150}, scan_detail: {total_process_keys: 7500000, total_process_keys_size: 1526085547, total_keys: 7500150, get_snapshot_time: 15ms, rocksdb: {key_skipped_count: 7500000, block: {cache_hit_count: 25190}}} | eq(test.customer.c_mktsegment, ""AUTOMOBILE"") | N/A | N/A |
| │ │ └─TableFullScan_74 | 7500000.00 | 7500000 | cop[tikv] | table:customer | tikv_task:{proc max:159ms, min:0s, avg: 43.4ms, p80:104ms, p95:148ms, iters:7937, tasks:150} | keep order:false | N/A | N/A |
| │ └─TableReader_73(Probe) | 36347384.33 | 36374625 | root | | time:4s, loops:35403, cop_task: {num: 1615, max: 284.6ms, min: 1.41ms, avg: 69ms, p95: 165.6ms, max_proc_keys: 104416, p95_proc_keys: 104416, tot_proc: 1m47.7s, tot_wait: 168ms, rpc_num: 1615, rpc_time: 1m51.5s, copr_cache_hit_ratio: 0.00, distsql_concurrency: 15} | data:Selection_72 | 13.9 MB | N/A |
| │ └─Selection_72 | 36347384.33 | 36374625 | cop[tikv] | | tikv_task:{proc max:216ms, min:0s, avg: 64.4ms, p80:132ms, p95:157ms, iters:79749, tasks:1615}, scan_detail: {total_process_keys: 75000000, total_process_keys_size: 11391895327, total_keys: 75001615, get_snapshot_time: 104.3ms, rocksdb: {key_skipped_count: 75000000, block: {cache_hit_count: 3121, read_count: 189333, read_byte: 3.09 GB, read_time: 1.28s}}} | lt(test.orders.o_orderdate, 1995-03-13 00:00:00.000000) | N/A | N/A |
| │ └─TableFullScan_71 | 75000000.00 | 75000000 | cop[tikv] | table:orders | tikv_task:{proc max:203ms, min:0s, avg: 61.5ms, p80:126ms, p95:150ms, iters:79749, tasks:1615} | keep order:false | N/A | N/A |
| └─TableReader_79(Probe) | 161388779.98 | 161995407 | root | | time:19.9s, loops:157662, cop_task: {num: 7962, max: 359.2ms, min: 855.3µs, avg: 50.4ms, p95: 136.1ms, max_proc_keys: 95200, p95_proc_keys: 94176, tot_proc: 5m57.4s, tot_wait: 833ms, rpc_num: 7962, rpc_time: 6m40.9s, copr_cache_hit_ratio: 0.00, distsql_concurrency: 15} | data:Selection_78 | 29.2 MB | N/A |
| └─Selection_78 | 161388779.98 | 161995407 | cop[tikv] | | tikv_task:{proc max:173ms, min:0s, avg: 40.4ms, p80:87ms, p95:112ms, iters:324826, tasks:7962}, scan_detail: {total_process_keys: 300005811, total_process_keys_size: 59595430182, total_keys: 300013773, get_snapshot_time: 482.3ms, rocksdb: {key_skipped_count: 300005811, block: {cache_hit_count: 803520, read_count: 185706, read_byte: 2.93 GB, read_time: 1.27s}}} | gt(test.lineitem.l_shipdate, 1995-03-13 00:00:00.000000) | N/A | N/A |
| └─TableFullScan_77 | 300005811.00 | 300005811 | cop[tikv] | table:lineitem | tikv_task:{proc max:163ms, min:0s, avg: 38.2ms, p80:83ms, p95:106ms, iters:324826, tasks:7962} | keep order:false | N/A | N/A |
+-----------------------------------+--------------+-----------+-----------+----------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------+---------+
```
### 4. What is your TiDB version? (Required)
nightly
",1,the plan of tpch changes without any data update leading to performance regression bug report please answer these questions before submitting your issue thanks minimal reproduce step required deploy a tidb cluster tidb tikv pd restore tpch data run tpch for mins what did you expect to see required the plans of all queries would not change and the plan for should be stuck to what did you see instead required the plan for would change in some of the daily runs tpch placeholder select l orderkey sum l extendedprice l discount as revenue o orderdate o shippriority from customer orders lineitem where c mktsegment automobile and c custkey o custkey and l orderkey o orderkey and o orderdate and l shipdate group by l orderkey o orderdate o shippriority order by revenue desc o orderdate limit olap detail log id plan digest elapsed time s id estrows actrows task access object execution info operator info memory disk projection root time loops concurrency off test lineitem l orderkey column test orders o orderdate test orders o shippriority kb n a └─topn root time loops column desc test orders o orderdate offset count kb n a └─hashagg root time loops partial worker wall time concurrency task num tot wait tot exec tot time max final worker wall time concurrency task num tot wait tot exec tot time max group by column column column funcs sum column column funcs firstrow column test orders o orderdate funcs firstrow column test orders o shippriority funcs firstrow column test lineitem l orderkey mb n a └─projection root time loops concurrency mul test lineitem l extendedprice minus test lineitem l discount column test orders o orderdate test orders o shippriority test lineitem l orderkey test lineitem l orderkey test orders o orderdate test orders o shippriority mb n a └─indexhashjoin root time loops inner total concurrency task construct fetch build join inner join inner indexlookup outer key test orders o orderkey inner key test lineitem l orderkey equal cond eq test orders o orderkey test lineitem l orderkey mb n a ├─hashjoin build root time loops build hash table total fetch build probe concurrency total max probe fetch inner join equal mb bytes │ ├─tablereader build root time loops cop task num max min avg max proc keys proc keys tot proc tot wait rpc num rpc time copr cache hit ratio distsql concurrency data selection mb n a │ │ └─selection cop tikv task proc max min avg iters tasks scan detail total process keys total process keys size total keys get snapshot time rocksdb key skipped count block cache hit count eq test customer c mktsegment automobile n a n a │ │ └─tablefullscan cop table customer tikv task proc max min avg iters tasks keep order false n a n a │ └─tablereader probe root time loops cop task num max min avg max proc keys proc keys tot proc tot wait rpc num rpc time copr cache hit ratio distsql concurrency data selection mb n a │ └─selection cop tikv task proc max min avg iters tasks scan detail total process keys total process keys size total keys get snapshot time rocksdb key skipped count block cache hit count read count read byte mb read time lt test orders o orderdate n a n a │ └─tablefullscan cop table orders tikv task proc max min avg iters tasks keep order false n a n a └─indexlookup probe root time loops index task total time fetch handle build wait table task total time num concurrency kb n a ├─indexrangescan build cop table lineitem index primary l orderkey l linenumber time loops cop task num max min avg max proc keys proc keys tot proc tot wait rpc num rpc time copr 
cache hit ratio distsql concurrency tikv task proc max min avg iters tasks scan detail total process keys total process keys size total keys get snapshot time rocksdb key skipped count block cache hit count read count read byte mb read time range decided by keep order false n a n a └─selection probe cop time loops cop task num max min avg max proc keys proc keys tot proc tot wait rpc num rpc time copr cache hit ratio distsql concurrency tikv task proc max min avg iters tasks scan detail total process keys total process keys size total keys get snapshot time rocksdb key skipped count block cache hit count read count read byte gb read time gt test lineitem l shipdate n a n a └─tablerowidscan cop table lineitem tikv task proc max min avg iters tasks keep order false n a n a olap detail log id plan digest elapsed time s id estrows actrows task access object execution info operator info memory disk projection root time loops concurrency off test lineitem l orderkey column test orders o orderdate test orders o shippriority kb n a └─topn root time loops column desc test orders o orderdate offset count kb n a └─hashagg root time loops partial worker wall time concurrency task num tot wait tot exec tot time max final worker wall time concurrency task num tot wait tot exec tot time max group by column column column funcs sum column column funcs firstrow column test orders o orderdate funcs firstrow column test orders o shippriority funcs firstrow column test lineitem l orderkey mb n a └─projection root time loops concurrency mul test lineitem l extendedprice minus test lineitem l discount column test orders o orderdate test orders o shippriority test lineitem l orderkey test lineitem l orderkey test orders o orderdate test orders o shippriority mb n a └─hashjoin root time loops build hash table total fetch build probe concurrency total max probe fetch inner join equal mb bytes ├─hashjoin build root time loops build hash table total fetch build probe concurrency total max probe fetch inner join equal mb bytes │ ├─tablereader build root time loops cop task num max min avg max proc keys proc keys tot proc tot wait rpc num rpc time copr cache hit ratio distsql concurrency data selection mb n a │ │ └─selection cop tikv task proc max min avg iters tasks scan detail total process keys total process keys size total keys get snapshot time rocksdb key skipped count block cache hit count eq test customer c mktsegment automobile n a n a │ │ └─tablefullscan cop table customer tikv task proc max min avg iters tasks keep order false n a n a │ └─tablereader probe root time loops cop task num max min avg max proc keys proc keys tot proc tot wait rpc num rpc time copr cache hit ratio distsql concurrency data selection mb n a │ └─selection cop tikv task proc max min avg iters tasks scan detail total process keys total process keys size total keys get snapshot time rocksdb key skipped count block cache hit count read count read byte gb read time lt test orders o orderdate n a n a │ └─tablefullscan cop table orders tikv task proc max min avg iters tasks keep order false n a n a └─tablereader probe root time loops cop task num max min avg max proc keys proc keys tot proc tot wait rpc num rpc time copr cache hit ratio distsql concurrency data selection mb n a └─selection cop tikv task proc max min avg iters tasks scan detail total process keys total process keys size total keys get snapshot time rocksdb key skipped count block cache hit count read count read byte gb read time gt test lineitem l shipdate n a n a 
└─tablefullscan cop table lineitem tikv task proc max min avg iters tasks keep order false n a n a what is your tidb version required nightly ,1
23756,3851867537.0,IssuesEvent,2016-04-06 05:28:51,GPF/imame4all,https://api.github.com/repos/GPF/imame4all,closed,Rom List doesn't work with Touch Overlay disabled using Custom Rom Path,auto-migrated Priority-Medium Type-Defect,"```
What steps will reproduce the problem?
1. Change to a custom rom location
2. Turn OFF Landscape and Portrait overlays
3. Exit MAME.
4. Reload MAME.
5. Touch screen. No rom list appears, only menu.
NOTE: If you go into OPTIONS and turn the overlays BACK ON ... the rom list
will appear -- so it is definitely a glitch.
What is the expected output? What do you see instead?
ROM LIST will appear without enabling overlays.
What version of the product are you using? On what operating system?
This occurs in 1.4.1 and 1.5.x .. any build that accepts custom ROM paths.
This is on an Asus Transformer 3.2 latest build.
Please provide any additional information below.
Fully reproducible. Will troubleshoot, test builds, shoot video - **whatever**
it takes to get this fixed.
```
Original issue reported on code.google.com by `dark...@gmail.com` on 3 Jan 2012 at 4:12",1.0,"Rom List doesn't work with Touch Overlay disabled using Custom Rom Path - ```
What steps will reproduce the problem?
1. Change to a custom rom location
2. Turn OFF Landscape and Portrait overlays
3. Exit MAME.
4. Reload MAME.
5. Touch screen. No rom list appears, only menu.
NOTE: If you go into OPTIONS and turn the overlays BACK ON ... the rom list
will appear -- so it is definitely a glitch.
What is the expected output? What do you see instead?
ROM LIST will appear without enabling overlays.
What version of the product are you using? On what operating system?
This occurs in 1.4.1 and 1.5.x .. any build that accepts custom ROM paths.
This is on an Asus Transformer 3.2 latest build.
Please provide any additional information below.
Fully reproducible. Will troubleshoot, test builds, shoot video - **whatever**
it takes to get this fixed.
```
Original issue reported on code.google.com by `dark...@gmail.com` on 3 Jan 2012 at 4:12",0,rom list doesn t work with touch overlay disabled using custom rom path what steps will reproduce the problem change to a custom rom location turn off landscape and portrait overlays exit mame reload mame touch screen no rom list appears only menu note if you go into options and turn the overlays back on the rom list will appear so it is definitely a glitch what is the expected output what do you see instead rom list will appear without enabling overlays what version of the product are you using on what operating system this occurs in and x any build that accepts custom rom paths this is on an asus transformer latest build please provide any additional information below fully reproducable will troubleshoot test builds shoot video whatever it takes to get this fixed original issue reported on code google com by dark gmail com on jan at ,0
400,6190432579.0,IssuesEvent,2017-07-04 15:22:20,eclipse/smarthome,https://api.github.com/repos/eclipse/smarthome,closed,Correction for ScriptedRuleProvider.xml,Automation,"In the file `ScriptedRuleProvider.xml` the following definition
``
should be replaced with
``
(... false `.internal.shared` in the name definition)",1.0,"Correction for ScriptedRuleProvider.xml - In the file `ScriptedRuleProvider.xml` the following definition
``
should be replaced with
``
(... false `.internal.shared` in the name definition)",1,correction for scriptedruleprovider xml in the file scriptedruleprovider xml the following definition should be replaced with false internal shared in the name definition ,1
7633,25312611366.0,IssuesEvent,2022-11-17 18:43:11,tigerbeetledb/tigerbeetle,https://api.github.com/repos/tigerbeetledb/tigerbeetle,opened,Run sample programs in docker-compose in Github CI,automation,"I got this started [here](https://github.com/tigerbeetledb/tigerbeetle-go/pull/17/files).
Now that we've got a monorepo this can even be simplified a bit more. Instead of running against the latest built Docker image for the TigerBeetle server, it should build against the current commit so we are truly running integration tests.
And it should run for every sample program so that we can ensure our sample programs are correct.
Incidentally this also provides integration tests for the entire database.",1.0,"Run sample programs in docker-compose in Github CI - I got this started [here](https://github.com/tigerbeetledb/tigerbeetle-go/pull/17/files).
Now that we've got a monorepo this can even be simplified a bit more. Instead of running against the latest built Docker image for the TigerBeetle server, it should build against the current commit so we are truly running integration tests.
And it should run for every sample program so that we can ensure our sample programs are correct.
Incidentally this also provides integration tests for the entire database.",1,run sample programs in docker compose in github ci i got this started now that we ve got a monorepo this can even be simplified a bit more instead of running against the latest built docker image for the tigerbeetle server it should build against the current commit so we are truly running integration tests and it should run for every sample program so that we can ensure our sample programs are correct incidentally this also provides integration tests for the entire database ,1
705,3041314556.0,IssuesEvent,2015-08-07 20:32:10,brunobuzzi/OrbeonPersistenceLayer,https://api.github.com/repos/brunobuzzi/OrbeonPersistenceLayer,opened,REST: Orbeon Form Runner Summary,GemStone Service Orbeon Service Call,Implement service for form runner summary (user created applications and forms),2.0,REST: Orbeon Form Runner Summary - Implement service for form runner summary (user created applications and forms),0,rest orbeon form runner summary implement service for form runner summary user created applications and forms ,0
10148,31810023253.0,IssuesEvent,2023-09-13 16:10:49,red-hat-storage/ocs-ci,https://api.github.com/repos/red-hat-storage/ocs-ci,closed,Adjust ui tests to Object Storage ui elements,ui_automation Squad/Black,"
4.14 ODF deployment has changes, so we need to adjust PageNavigator and MCG, Object related tests",1.0,"Adjust ui tests to Object Storage ui elements - 
4.14 ODF deployment has changes, so we need to adjust PageNavigator and MCG, Object related tests",1,adjust ui tests to object storage ui elements odf deployment have changes so we need to adjust pagenavigator and mcg object related tests,1
394551,11645091782.0,IssuesEvent,2020-02-29 22:44:28,grpc/grpc,https://api.github.com/repos/grpc/grpc,closed,node test failures,disposition/stale kind/bug lang/node priority/P2,"https://source.cloud.google.com/results/invocations/0231cf31-76b9-40fe-91f7-5c474356d328/targets
Looks like there are a few failures here, but I don't understand the output well enough to know how to split them up.",1.0,"node test failures - https://source.cloud.google.com/results/invocations/0231cf31-76b9-40fe-91f7-5c474356d328/targets
Looks like there are a few failures here, but I don't understand the output well enough to know how to split them up.",0,node test failures looks like there are a few failures here but i don t understand the output well enough to know how to split them up ,0
324,5409902957.0,IssuesEvent,2017-03-01 06:33:37,eclipse/smarthome,https://api.github.com/repos/eclipse/smarthome,closed,Automation: Context map should use Object values,Automation,"With respect to the (offtopic) discussion at #3007 I would like to create a new issue.
The values of the elements in the context map should be of type Object (so, a known type) and not `?`.",1.0,"Automation: Context map should use Object values - With respect to the (offtopic) discussion at #3007 I would like to create a new issue.
The values of the elements in the context map should be of type Object (so, a known type) and not `?`.",1,automation context map should use object values with respect to the offtopic discussion at i would like to create a new issue the values of the elements in the context map should be of type object so a known type and not ,1
634410,20360933428.0,IssuesEvent,2022-02-20 17:19:52,ReliaQualAssociates/ramstk,https://api.github.com/repos/ReliaQualAssociates/ramstk,closed,Hardware module allows adding a sibling to the top-level item,type: fix priority: high status: inprogress bump: patch dobranch,"**Describe the bug**
The hardware module allows the creation of a sibling item to the top-level (system) item. There should only be one top-level item for each revision in a program database.
***Expected Behavior***
As a RAMSTK analyst, I want only one system level item per revision so there is only one hardware BoM per revision.
***Actual Behavior***
Pressing the 'Add Sibling' button with the top-level item selected results in the creation of a second top-level item.
**Reproduce**
1. Launch RAMSTK
2. Open a Program database
3. Select the Hardware module
4. Select the top-level item in the Module Book
5. Press the 'Add Sibling' button or select 'Add Sibling' from the pop-up menu
> Steps to reproduce the behavior.
**Logs**
None
**Additional Comments**
The _do_request_insert_sibling() method in the HardwareModuleView() class should check the parent ID of the selected item, raise an information dialog telling the user a sibling can't be added to the top-level item, and then exit without sending the request insert message.
dobranch
priority: high
type: fix",1.0,"Hardware module allows adding a sibling to the top-level item - **Describe the bug**
The hardware module allows the creation of a sibling item to the top-level (system) item. There should only be one top-level item for each revision in a program database.
***Expected Behavior***
As a RAMSTK analyst, I want only one system level item per revision so there is only one hardware BoM per revision.
***Actual Behavior***
Pressing the 'Add Sibling' button with the top-level item selected results in the creation of a second top-level item.
**Reproduce**
1. Launch RAMSTK
2. Open a Program database
3. Select the Hardware module
4. Select the top-level item in the Module Book
5. Press the 'Add Sibling' button or select 'Add Sibling' from the pop-up menu
> Steps to reproduce the behavior.
**Logs**
None
**Additional Comments**
The _do_request_insert_sibling() method in the HardwareModuleView() class should check the parent ID of the selected item, raise an information dialog telling the user a sibling can't be added to the top-level item, and then exit without sending the request insert message.
dobranch
priority: high
type: fix",0,hardware module allows adding a sibling to the top level item describe the bug the hardware module allows the creation of a sibling item to the top level system item there should only be one top level item for each revision in a program database expected behavior as a ramstk analyst i want only one system level item per revision so there is only one hardware bom per revision actual behavior pressing the add sibling button with the top level item selected results in the creation of a second top level item reproduce launch ramstk open a program database select the hardware module select the top level item in the module book press the add sibling button or select add sibling from the pop up menu steps to reproduce the behavior logs none additional comments the do request insert sibling method in the hardwaremoduleview class should check the parent id of the selected item raise an information dialog telling the user a sibling can t be added to the top level item and then exit without sending the request insert message dobranch priority high type fix,0
392176,11584550198.0,IssuesEvent,2020-02-22 18:00:01,ayumi-cloud/oc-security-module,https://api.github.com/repos/ayumi-cloud/oc-security-module,opened,Add multi-level tabs - part of the new ui in October CMS II,Firewall New UI Priority: Medium enhancement in-progress,"### Enhancement idea
- [ ] e.g. Add this to firewall or virus definitions
",1.0,"Add multi-level tabs - part of the new ui in October CMS II - ### Enhancement idea
- [ ] e.g. Add this to firewall or virus definitions
",0,add multi level tabs part of the new ui in october cms ii enhancement idea e g add this to firewall or virus definitions ,0
4360,16165164054.0,IssuesEvent,2021-05-01 10:35:08,davepl/Primes,https://api.github.com/repos/davepl/Primes,closed,Setup CI for this project,automation,"Hello,
It would be nice to set up some CI to run these implementations in a controlled environment periodically. We're getting benchmarks from different machines, and it is hard to keep track of all the numbers for all the different implementations. We need one source of truth.
",1.0,"Setup CI for this project - Hello,
It would be nice to set up some CI to run these implementations in a controlled environment periodically. We're getting benchmarks from different machines, and it is hard to keep track of all the numbers for all the different implementations. We need one source of truth.
",1,setup ci for this project hello it would be nice to set up some ci to run these implementations in a controlled environment periodically we re getting benchmarks from different machines and it is hard to keep track of all the numbers for all the different implementations we need one source of truth ,1
2947,12856722916.0,IssuesEvent,2020-07-09 08:07:45,elastic/apm-integration-testing,https://api.github.com/repos/elastic/apm-integration-testing,opened,Error while loading shared libraries: libXss.so.1,automation bug team:automation,"Today we started to see the following error in the RUM test
```
AssertionError: Expected done, got Failed to launch chrome! /rumjs-integration-test/node_modules/puppeteer/.local-chromium/linux-686378/chrome-linux/chrome: error while loading shared libraries: libXss.so.1: cannot open shared object file: No such file or directory TROUBLESHOOTING: https://github.com/GoogleChrome/puppeteer/blob/master/docs/troubleshooting.md
```",2.0,"Error while loading shared libraries: libXss.so.1 - Today we start to show the following error on RUM test
```
AssertionError: Expected done, got Failed to launch chrome! /rumjs-integration-test/node_modules/puppeteer/.local-chromium/linux-686378/chrome-linux/chrome: error while loading shared libraries: libXss.so.1: cannot open shared object file: No such file or directory TROUBLESHOOTING: https://github.com/GoogleChrome/puppeteer/blob/master/docs/troubleshooting.md
```",1,error while loading shared libraries libxss so today we start to show the following error on rum test assertionerror expected done got failed to launch chrome rumjs integration test node modules puppeteer local chromium linux chrome linux chrome error while loading shared libraries libxss so cannot open shared object file no such file or directory troubleshooting ,1
162481,12677681923.0,IssuesEvent,2020-06-19 08:15:03,SymbiFlow/sv-tests,https://api.github.com/repos/SymbiFlow/sv-tests,closed,Add advanced simulation tests,enhancement tests,"Currently we have only a basic set of simulation tests, we should add more tests for the following chapters:
- [x] 16. Assertions PR #821
- [x] 18. Constrained random value generation PR #820
Besides those chapters we should also cover some advanced simulation aspects like:
- [x] UVM Scoreboards PR #836
- [x] Bus functional models #836
While adding those tests we should also try to incorporate some tests utilizing UVM as it is used in real life simulation flows but its current test coverage needs improvement (#560).
Please update this issue with links to PRs/Issues to track status.",1.0,"Add advanced simulation tests - Currently we have only a basic set of simulation tests, we should add more tests for the following chapters:
- [x] 16. Assertions PR #821
- [x] 18. Constrained random value generation PR #820
Besides those chapters we should also cover some advanced simulation aspects like:
- [x] UVM Scoreboards PR #836
- [x] Bus functional models #836
While adding those tests we should also try to incorporate some tests utilizing UVM as it is used in real life simulation flows but its current test coverage needs improvement (#560).
Please update this issue with links to PRs/Issues to track status.",0,add advanced simulation tests currently we have only a basic set of simulation tests we should add more tests for the following chapters assertions pr constrained random value generation pr besides those chapters we should also cover some advanced simulation aspects like uvm scoreboards pr bus functional models while adding those tests we should also try to incorporate some tests utilizing uvm as it is used in real life simulation flows but its current test coverage needs improvement please update this issue with links to prs issues to track status ,0
953,8824294194.0,IssuesEvent,2019-01-02 16:31:43,arcus-azure/arcus.eventgrid.sidecar,https://api.github.com/repos/arcus-azure/arcus.eventgrid.sidecar,closed,Define branch policies,automation management,"Define branch policies on the `master` branch.
It should:
- [x] Build every PR with our CI
- [x] Be approved by 1 person""",1.0,"Define branch policies - Define branch policies on the `master` branch.
It should:
- [x] Build every PR with our CI
- [x] Be approved by 1 person""",1,define branch policies define branch policies on the master branch it should build every pr with our ci be approved by person ,1
5495,19808262754.0,IssuesEvent,2022-01-19 09:27:13,jibebe-jkuat/internship2022,https://api.github.com/repos/jibebe-jkuat/internship2022,reopened,A drawing of the chassis for the robot car ,Automation,AutoCAD 2D drawing will be drafted for 2D printing in the prototyping Lab,1.0,A drawing of the chassis for the robot car - AutoCAD 2D drawing will be drafted for 2D printing in the prototyping Lab,1,a drawing of the chassis for the robot car autocad drawing will be drafted for printing in the prototyping lab,1
740794,25767820986.0,IssuesEvent,2022-12-09 04:24:02,WeMakeDevs/classroom-monitor-bot,https://api.github.com/repos/WeMakeDevs/classroom-monitor-bot,closed,[BUG] Website not deploying,🟧 priority: high 🔒 staff only 💣type: bug,"### Describe the bug
Looks like the website's not deploying anymore.
CC: @kaiwalyakoparkar, @siddhant-khisty.
### To Reproduce
_No response_
### Expected Behavior
_No response_
### Screenshot/ Video
_No response_
### Desktop (please complete the following information)
_No response_
### Additional context
_No response_",1.0,"[BUG] Website not deploying - ### Describe the bug
Looks like the website's not deploying anymore.
CC: @kaiwalyakoparkar, @siddhant-khisty.
### To Reproduce
_No response_
### Expected Behavior
_No response_
### Screenshot/ Video
_No response_
### Desktop (please complete the following information)
_No response_
### Additional context
_No response_",0, website not deploying describe the bug looks like the website s not deplyoying anymore cc kaiwalyakoparkar siddhant khisty to reproduce no response expected behavior no response screenshot video no response desktop please complete the following information no response additional context no response ,0
324377,23996038571.0,IssuesEvent,2022-09-14 07:38:12,Yun-SeYeong/Bitcoin-Trading-System,https://api.github.com/repos/Yun-SeYeong/Bitcoin-Trading-System,closed,Sync API count: change the 200-item limit,documentation enhancement,The current Upbit API can only fetch up to 200 items per request. Change the sync to split the request into multiple batches of 200 items and combine the results.,1.0,Sync API count: change the 200-item limit - The current Upbit API can only fetch up to 200 items per request. Change the sync to split the request into multiple batches of 200 items and combine the results.,0,sync api count change the item limit the current upbit api can only fetch up to items per request change the sync to split the request into multiple batches of items and combine the results,0
801123,28454023228.0,IssuesEvent,2023-04-17 05:07:09,magento/magento2,https://api.github.com/repos/magento/magento2,reopened,Read-only app/etc/,Triage: Dev.Experience Priority: P3 Progress: done Issue: ready for confirmation Issue: needs update,"Is there a reason for the `app/etc/` path to be writable during `php bin/magento setup:upgrade --keep-generated`?
Looking into `Magento\Framework\Setup\FilePermissions`, the [getMissingWritableDirectoriesForDbUpgrade](https://github.com/magento/magento2/blob/2.4-develop/lib/internal/Magento/Framework/Setup/FilePermissions.php#L282-L300) asks for `app/etc/` to be writable, but it's not clear what is being written to that folder.
My goal is to deploy magento in a read-only environment (except for the `var/` folder), for an already installed Magento, so theoretically none of those files should be changed compared to what the CI builds.
",1.0,"Read-only app/etc/ - Is there a reason for the `app/etc/` path to be writable during `php bin/magento setup:upgrade --keep-generated`?
Looking into `Magento\Framework\Setup\FilePermissions`, the [getMissingWritableDirectoriesForDbUpgrade](https://github.com/magento/magento2/blob/2.4-develop/lib/internal/Magento/Framework/Setup/FilePermissions.php#L282-L300) asks for `app/etc/` to be writable, but it's not clear what is being written to that folder.
My goal is to deploy magento in a read-only environment (except for the `var/` folder), for an already installed Magento, so theoretically none of those files should be changed compared to what the CI builds.
",0,read only app etc is there a reason for the app etc path to be writable during php bin magento setup upgrade keep generated looking into magento framework setup filepermissions the asks for app etc to be writable but it s not clear what is being written to that folder my goal is to deploy magento in a read only environment except for the var folder for an already installed magento so theoretically none of those files should be changed compared to what the ci builds ,0
2295,11722915642.0,IssuesEvent,2020-03-10 08:00:16,wazuh/wazuh-qa,https://api.github.com/repos/wazuh/wazuh-qa,closed,Syscheck automated tests: Synchronization disabled,automation component/fim,"## Description
Add test that checks that synchronization is disabled when set to disabled in the configuration.
This message must not appear in the log:
```
2020/03/06 11:51:58 ossec-syscheckd[23205] fim_sync.c:56 at fim_run_integrity(): DEBUG: Initializing FIM Integrity Synchronization check. Sync interval is 300 seconds.
```
This is the required configuration for the synchronization:
```xml
no5m1h10
```
",1.0,"Syscheck automated tests: Synchronization disabled - ## Description
Add test that checks that synchronization is disabled when set to disabled in the configuration.
This message must not appear in the log:
```
2020/03/06 11:51:58 ossec-syscheckd[23205] fim_sync.c:56 at fim_run_integrity(): DEBUG: Initializing FIM Integrity Synchronization check. Sync interval is 300 seconds.
```
This is the required configuration for the synchronization:
```xml
no5m1h10
```
",1,syscheck automated tests synchronization disabled description add test that checks that synchronization is disabled when set to disabled in the configuration this message must not appear in the log ossec syscheckd fim sync c at fim run integrity debug initializing fim integrity synchronization check sync interval is seconds this is the required configuration for the synchronization xml no ,1
7552,25110239078.0,IssuesEvent,2022-11-08 19:53:06,o3de/o3de,https://api.github.com/repos/o3de/o3de,closed,Linux/Mac/iOS asset_profile from clean failing,kind/bug sig/platform sig/graphics-audio triage/accepted priority/major kind/automation,"**Describe the bug**
The asset_profile job for Linux, Mac, and iOS has been failing from clean builds for over 7 days
**Failed Jenkins Job Information:**
`asset_profile` from the nightly-clean jobs, jobs 55-61
```
[2021-08-06T06:29:00.909Z] AssetProcessor: Processed ""ResourcePools/DefaultConstantBufferPool.resourcepool"" (""server"")...
[2021-08-06T06:29:00.909Z] AssetProcessor: Processed ""LightingPresets/LowContrast/royal_esplanade_2k_iblskyboxcm.exr"" (""pc"")...
[2021-08-06T06:29:00.909Z] AssetProcessor: Failed Shaders/AuxGeom/AuxGeomObjectLit.shadervariantlist, (server)...
[2021-08-06T06:29:00.909Z] AssetProcessor: Failed Shader/ImagePreview.shadervariantlist, (server)...
[2021-08-06T06:29:00.909Z] AssetProcessor: Failed Shaders/SimpleTextured.shadervariantlist, (server)...
[2021-08-06T06:29:00.909Z] AssetProcessor: Failed Shaders/Shadow/DepthExponentiation.shadervariantlist, (server)...
[2021-08-06T06:29:00.909Z] AssetProcessor: Failed Shaders/SimpleTextured.shadervariantlist, (server)...
[2021-08-06T06:29:00.909Z] AssetProcessor: Failed Shader/ImagePreview.shadervariantlist, (server)...
[2021-08-06T06:29:00.909Z] AssetProcessor: Failed Shaders/AuxGeom/AuxGeomObjectLit.shadervariantlist, (server)...
[2021-08-06T06:29:00.909Z] AssetProcessor: Failed Shaders/Shadow/DepthExponentiation.shadervariantlist, (server)...
```",1.0,"Linux/Mac/iOS asset_profile from clean failing - **Describe the bug**
The asset_profile job for Linux, Mac, and iOS has been failing from clean builds for over 7 days
**Failed Jenkins Job Information:**
`asset_profile` from the nightly-clean jobs, jobs 55-61
```
[2021-08-06T06:29:00.909Z] AssetProcessor: Processed ""ResourcePools/DefaultConstantBufferPool.resourcepool"" (""server"")...
[2021-08-06T06:29:00.909Z] AssetProcessor: Processed ""LightingPresets/LowContrast/royal_esplanade_2k_iblskyboxcm.exr"" (""pc"")...
[2021-08-06T06:29:00.909Z] AssetProcessor: Failed Shaders/AuxGeom/AuxGeomObjectLit.shadervariantlist, (server)...
[2021-08-06T06:29:00.909Z] AssetProcessor: Failed Shader/ImagePreview.shadervariantlist, (server)...
[2021-08-06T06:29:00.909Z] AssetProcessor: Failed Shaders/SimpleTextured.shadervariantlist, (server)...
[2021-08-06T06:29:00.909Z] AssetProcessor: Failed Shaders/Shadow/DepthExponentiation.shadervariantlist, (server)...
[2021-08-06T06:29:00.909Z] AssetProcessor: Failed Shaders/SimpleTextured.shadervariantlist, (server)...
[2021-08-06T06:29:00.909Z] AssetProcessor: Failed Shader/ImagePreview.shadervariantlist, (server)...
[2021-08-06T06:29:00.909Z] AssetProcessor: Failed Shaders/AuxGeom/AuxGeomObjectLit.shadervariantlist, (server)...
[2021-08-06T06:29:00.909Z] AssetProcessor: Failed Shaders/Shadow/DepthExponentiation.shadervariantlist, (server)...
```",1,linux mac ios asset profile from clean failing describe the bug asset profile job for linux mac ios from clean has been failing for over days failed jenkins job information asset profile from the nightly clean jobs jobs assetprocessor processed resourcepools defaultconstantbufferpool resourcepool server assetprocessor processed lightingpresets lowcontrast royal esplanade iblskyboxcm exr pc assetprocessor failed shaders auxgeom auxgeomobjectlit shadervariantlist server assetprocessor failed shader imagepreview shadervariantlist server assetprocessor failed shaders simpletextured shadervariantlist server assetprocessor failed shaders shadow depthexponentiation shadervariantlist server assetprocessor failed shaders simpletextured shadervariantlist server assetprocessor failed shader imagepreview shadervariantlist server assetprocessor failed shaders auxgeom auxgeomobjectlit shadervariantlist server assetprocessor failed shaders shadow depthexponentiation shadervariantlist server ,1
9555,6384314942.0,IssuesEvent,2017-08-03 04:19:51,upspin/upspin,https://api.github.com/repos/upspin/upspin,closed,"all: snapshots, docs, and easy creation",docs usability,"The Writers file must include the snapshot user explicitly, or else (better) implicitly, because otherwise the user cannot create the directory to store the snapshots.
Whatever the result, the process must be implemented, tested, documented, and added to the signup and/or setup docs.",True,"all: snapshots, docs, and easy creation - The Writers file must include the snapshot user explicitly, or else (better) implicitly, because otherwise the user cannot create the directory to store the snapshots.
Whatever the result, the process must be implemented, tested, documented, and added to the signup and/or setup docs.",0,all snapshots docs and easy creation the writers file must include the snapshot user explicitly or else better implicitly because otherwise the user cannot create the directory to store the snapshots whatever the result the process must be implemented tested documented and added to the signup and or setup docs ,0
9469,28491586962.0,IssuesEvent,2023-04-18 11:37:55,carpentries/amy,https://api.github.com/repos/carpentries/amy,closed,Update automated email triggers to remove supporting instructor,component: email automation,"We are deprecating the role of supporting instructor so any automated email that checks for this role should be updated. The supporting instructor role is no longer required.
",1.0,"Update automated email triggers to remove supporting instructor - We are deprecating the role of supporting instructor so any automated email that checks for this role should be updated. The supporting instructor role is no longer required.
",1,update automated email triggers to remove supporting instructor we are deprecating the role of supporting instructor so any automated email that checks for this role should be updated the supporting instructor role is no longer required ,1
602611,18476366077.0,IssuesEvent,2021-10-18 07:47:13,webcompat/web-bugs,https://api.github.com/repos/webcompat/web-bugs,closed,teams.live.com - site is not usable,browser-firefox priority-critical engine-gecko,"
**URL**: https://teams.live.com
**Browser / Version**: Firefox 93.0
**Operating System**: Windows 10
**Tested Another Browser**: Yes Chrome
**Problem type**: Site is not usable
**Description**: Browser unsupported
**Steps to Reproduce**:
Just won't run on Firefox, forcing to use Edge
Browser Configuration
None
_From [webcompat.com](https://webcompat.com/) with ❤️_",1.0,"teams.live.com - site is not usable -
**URL**: https://teams.live.com
**Browser / Version**: Firefox 93.0
**Operating System**: Windows 10
**Tested Another Browser**: Yes Chrome
**Problem type**: Site is not usable
**Description**: Browser unsupported
**Steps to Reproduce**:
Just won't run on Firefox, forcing to use Edge
Browser Configuration
None
_From [webcompat.com](https://webcompat.com/) with ❤️_",0,teams live com site is not usable url browser version firefox operating system windows tested another browser yes chrome problem type site is not usable description browser unsupported steps to reproduce just won t run on firefox forcing to use edge browser configuration none from with ❤️ ,0
285104,8754809809.0,IssuesEvent,2018-12-14 12:59:03,zephyrproject-rtos/zephyr,https://api.github.com/repos/zephyrproject-rtos/zephyr,closed,"QEMU serial output is not reliable, may affect SLIP and thus network testing",area: Networking area: QEMU bug priority: low,"This ticket provides a (partial) answer of why the issue described in https://github.com/zephyrproject-rtos/zephyr/pull/7831#issuecomment-392067202 happens, specifically:
1. when running samples/net/socket/dumb_http_server sample app on qemu_cortex_m3,
2. running `ab -n1000 http://192.0.2.1:8080/`,
3. processing of requests gets stuck after just a few dozen requests; `ab` eventually times out
4. (ab can be restarted and number of requests can be processed still, i.e. the app keeps running, but requests get stuck soon)
So, it's a more or less known issue, but it's not always kept in mind: UART emulation in QEMU is sub-ideal, and there can be problems with serial communication, which is used by SLIP and loop-slip-tap.sh. This is what happens here.
For example, SLIP driver logging:
~~~
[slip] [INF] slip_send: sent: pkt 0x20001ec4 llr: 14, len: 54
[slip] [INF] slip_send: sent: pkt 0x20001ec4 llr: 14, len: 1506
[slip] [INF] slip_send: sent: pkt 0x20001e78 llr: 14, len: 783
[slip] [INF] slip_send: sent: pkt 0x20001e2c llr: 14, len: 54
Connection from 192.0.2.2 closed
[slip] [INF] slip_send: sent: pkt 0x20001e78 llr: 14, len: 783
~~~
What we can see here is that pkt 0x20001e78 was transmitted twice. But here's what Wireshark sees:

As can be seen, instead of the first 783-byte packet it receives a broken 275-byte packet, which gets ignored by the host. That's what causes the retransmission, and the next time the packet gets through.
",1.0,"QEMU serial output is not reliable, may affect SLIP and thus network testing - This ticket provides a (partial) answer of why the issue described in https://github.com/zephyrproject-rtos/zephyr/pull/7831#issuecomment-392067202 happens, specifically:
1. when running samples/net/socket/dumb_http_server sample app on qemu_cortex_m3,
2. running `ab -n1000 http://192.0.2.1:8080/`,
3. processing of requests gets stuck after just a few dozen requests; `ab` eventually times out
4. (ab can be restarted and number of requests can be processed still, i.e. the app keeps running, but requests get stuck soon)
So, it's a more or less known issue, but it's not always kept in mind: UART emulation in QEMU is sub-ideal, and there can be problems with serial communication, which is used by SLIP and loop-slip-tap.sh. This is what happens here.
For example, SLIP driver logging:
~~~
[slip] [INF] slip_send: sent: pkt 0x20001ec4 llr: 14, len: 54
[slip] [INF] slip_send: sent: pkt 0x20001ec4 llr: 14, len: 1506
[slip] [INF] slip_send: sent: pkt 0x20001e78 llr: 14, len: 783
[slip] [INF] slip_send: sent: pkt 0x20001e2c llr: 14, len: 54
Connection from 192.0.2.2 closed
[slip] [INF] slip_send: sent: pkt 0x20001e78 llr: 14, len: 783
~~~
What we can see here is that pkt 0x20001e78 was transmitted twice. But here's what Wireshark sees:

As can be seen, instead of the first 783-byte packet it receives a broken 275-byte packet, which gets ignored by the host. That's what causes the retransmission, and the next time the packet gets through.
",0,qemu serial output is not reliable may affect slip and thus network testing this ticket provides a partial answer of why the issue described in happens specifically when running samples net socket dumb http server sample app on qemu cortex running ab processing of requests gets stuck after just few dozens of requests ab eventually times out ab can be restarted and number of requests can be processed still i e the app keeps running but requests get stuck soon so it s more or less know issue but it s not always kept in mind uart emulation in qemu is sub ideal and there can be problems with serial communication which is used by slip and loop slip tap sh this is what happens here for example slip driver logging slip send sent pkt llr len slip send sent pkt llr len slip send sent pkt llr len slip send sent pkt llr len connection from closed slip send sent pkt llr len what we can see here is that pkt was transmitted twice but here s what wireshark sees as can be seen instead of first bytes packet it receives broken bytes packet which gets ignored by host that s what causes retransmission and next time the packet gets thru ,0
19070,3133303800.0,IssuesEvent,2015-09-10 00:18:58,beefproject/beef,https://api.github.com/repos/beefproject/beef,closed,`open_udp_socket': no datagram socket (RuntimeError),Defect,"could you please help me on this? this is the error that I get when i try to run beef in kali linux 2.0:
""root@ss:/usr/share/beef-xss# ./beef
[18:50:30][*] Bind socket [imapeudora1] listening on [0.0.0.0:2000].
[18:50:30][*] Browser Exploitation Framework (BeEF) 0.4.6.1-alpha
[18:50:30] | Twit: @beefproject
[18:50:30] | Site: http://beefproject.com
[18:50:30] | Blog: http://blog.beefproject.com
[18:50:30] |_ Wiki: https://github.com/beefproject/beef/wiki
[18:50:30][*] Project Creator: Wade Alcorn (@WadeAlcorn)
[18:50:30][*] BeEF is loading. Wait a few seconds...
[18:50:33][*] 12 extensions enabled.
[18:50:33][*] 241 modules enabled.
[18:50:33][*] 2 network interfaces were detected.
[18:50:33][+] running on network interface: 127.0.0.1
[18:50:33] | Hook URL: http://127.0.0.1:3000/hook.js
[18:50:33] |_ UI URL: http://127.0.0.1:3000/ui/panel
[18:50:33][+] running on network interface: 192.168.0.10
[18:50:33] | Hook URL: http://192.168.0.10:3000/hook.js
[18:50:33] |_ UI URL: http://192.168.0.10:3000/ui/panel
[18:50:33][*] RESTful API key: a3ed2a9e5386081c6cd57842fccd93f1af55f60e
[18:50:33][*] DNS Server: 127.0.0.1:5300 (udp)
[18:50:33] | Upstream Server: 8.8.8.8:53 (udp)
[18:50:33] |_ Upstream Server: 8.8.8.8:53 (tcp)
[18:50:33][*] HTTP Proxy: http://127.0.0.1:6789
[18:50:33][*] BeEF server started (press control+c to stop)
/usr/lib/ruby/vendor_ruby/eventmachine.rb:859:in `open_udp_socket': no datagram socket (RuntimeError)
from /usr/lib/ruby/vendor_ruby/eventmachine.rb:859:in `open_datagram_socket'
from /usr/lib/ruby/vendor_ruby/rubydns/server.rb:122:in `block in run'
from /usr/lib/ruby/vendor_ruby/rubydns/server.rb:119:in `each'
from /usr/lib/ruby/vendor_ruby/rubydns/server.rb:119:in `run'
from /usr/share/beef-xss/extensions/dns/dns.rb:127:in `block (3 levels) in run'
from /usr/lib/ruby/vendor_ruby/eventmachine.rb:959:in `call'
from /usr/lib/ruby/vendor_ruby/eventmachine.rb:959:in `block in run_deferred_callbacks'
from /usr/lib/ruby/vendor_ruby/eventmachine.rb:956:in `times'
from /usr/lib/ruby/vendor_ruby/eventmachine.rb:956:in `run_deferred_callbacks'
from /usr/lib/ruby/vendor_ruby/eventmachine.rb:187:in `run_machine'
from /usr/lib/ruby/vendor_ruby/eventmachine.rb:187:in `run'
from /usr/lib/ruby/vendor_ruby/thin/backends/base.rb:61:in `start'
from /usr/lib/ruby/vendor_ruby/thin/server.rb:159:in `start'
from /usr/share/beef-xss/core/main/server.rb:127:in `start'
from ./beef:145:in `' ""
I`ve tried to reinstall from repos, git and reinstalled all needed gems and tried with different versions of ruby but at the end I still get this.",1.0,"`open_udp_socket': no datagram socket (RuntimeError) - could you please help me on this? this is the error that I get when i try to run beef in kali linux 2.0:
""root@ss:/usr/share/beef-xss# ./beef
[18:50:30][*] Bind socket [imapeudora1] listening on [0.0.0.0:2000].
[18:50:30][*] Browser Exploitation Framework (BeEF) 0.4.6.1-alpha
[18:50:30] | Twit: @beefproject
[18:50:30] | Site: http://beefproject.com
[18:50:30] | Blog: http://blog.beefproject.com
[18:50:30] |_ Wiki: https://github.com/beefproject/beef/wiki
[18:50:30][*] Project Creator: Wade Alcorn (@WadeAlcorn)
[18:50:30][*] BeEF is loading. Wait a few seconds...
[18:50:33][*] 12 extensions enabled.
[18:50:33][*] 241 modules enabled.
[18:50:33][*] 2 network interfaces were detected.
[18:50:33][+] running on network interface: 127.0.0.1
[18:50:33] | Hook URL: http://127.0.0.1:3000/hook.js
[18:50:33] |_ UI URL: http://127.0.0.1:3000/ui/panel
[18:50:33][+] running on network interface: 192.168.0.10
[18:50:33] | Hook URL: http://192.168.0.10:3000/hook.js
[18:50:33] |_ UI URL: http://192.168.0.10:3000/ui/panel
[18:50:33][*] RESTful API key: a3ed2a9e5386081c6cd57842fccd93f1af55f60e
[18:50:33][*] DNS Server: 127.0.0.1:5300 (udp)
[18:50:33] | Upstream Server: 8.8.8.8:53 (udp)
[18:50:33] |_ Upstream Server: 8.8.8.8:53 (tcp)
[18:50:33][*] HTTP Proxy: http://127.0.0.1:6789
[18:50:33][*] BeEF server started (press control+c to stop)
/usr/lib/ruby/vendor_ruby/eventmachine.rb:859:in `open_udp_socket': no datagram socket (RuntimeError)
from /usr/lib/ruby/vendor_ruby/eventmachine.rb:859:in `open_datagram_socket'
from /usr/lib/ruby/vendor_ruby/rubydns/server.rb:122:in `block in run'
from /usr/lib/ruby/vendor_ruby/rubydns/server.rb:119:in `each'
from /usr/lib/ruby/vendor_ruby/rubydns/server.rb:119:in `run'
from /usr/share/beef-xss/extensions/dns/dns.rb:127:in `block (3 levels) in run'
from /usr/lib/ruby/vendor_ruby/eventmachine.rb:959:in `call'
from /usr/lib/ruby/vendor_ruby/eventmachine.rb:959:in `block in run_deferred_callbacks'
from /usr/lib/ruby/vendor_ruby/eventmachine.rb:956:in `times'
from /usr/lib/ruby/vendor_ruby/eventmachine.rb:956:in `run_deferred_callbacks'
from /usr/lib/ruby/vendor_ruby/eventmachine.rb:187:in `run_machine'
from /usr/lib/ruby/vendor_ruby/eventmachine.rb:187:in `run'
from /usr/lib/ruby/vendor_ruby/thin/backends/base.rb:61:in `start'
from /usr/lib/ruby/vendor_ruby/thin/server.rb:159:in `start'
from /usr/share/beef-xss/core/main/server.rb:127:in `start'
from ./beef:145:in `' ""
I`ve tried to reinstall from repos, git and reinstalled all needed gems and tried with different versions of ruby but at the end I still get this.",0, open udp socket no datagram socket runtimeerror could you please help me on this this is the error that i get when i try to run beef in kali linux root ss usr share beef xss beef bind socket listening on browser exploitation framework beef alpha twit beefproject site blog wiki project creator wade alcorn wadealcorn beef is loading wait a few seconds extensions enabled modules enabled network interfaces were detected running on network interface hook url ui url running on network interface hook url ui url restful api key dns server udp upstream server udp upstream server tcp http proxy beef server started press control c to stop usr lib ruby vendor ruby eventmachine rb in open udp socket no datagram socket runtimeerror from usr lib ruby vendor ruby eventmachine rb in open datagram socket from usr lib ruby vendor ruby rubydns server rb in block in run from usr lib ruby vendor ruby rubydns server rb in each from usr lib ruby vendor ruby rubydns server rb in run from usr share beef xss extensions dns dns rb in block levels in run from usr lib ruby vendor ruby eventmachine rb in call from usr lib ruby vendor ruby eventmachine rb in block in run deferred callbacks from usr lib ruby vendor ruby eventmachine rb in times from usr lib ruby vendor ruby eventmachine rb in run deferred callbacks from usr lib ruby vendor ruby eventmachine rb in run machine from usr lib ruby vendor ruby eventmachine rb in run from usr lib ruby vendor ruby thin backends base rb in start from usr lib ruby vendor ruby thin server rb in start from usr share beef xss core main server rb in start from beef in i ve tried to reinstall from repos git and reinstalled all needed gems and tried with different versions of ruby but at the end i still get this ,0
40217,2867572955.0,IssuesEvent,2015-06-05 14:07:07,Araq/Nim,https://api.github.com/repos/Araq/Nim,closed,nimsuggest should be fixed to work in separate repo,High Priority Tools,Nimsuggest was separated to its own repo and lives at http://github.com/nim-lang/nimsuggest. The only problem is that installing it via nimble fails. This should be fixed and the nimsuggest in this repo should stop diverging from the separated version.,1.0,nimsuggest should be fixed to work in separate repo - Nimsuggest was separated to its own repo and lives at http://github.com/nim-lang/nimsuggest. The only problem is that installing it via nimble fails. This should be fixed and the nimsuggest in this repo should stop diverging from the separated version.,0,nimsuggest should be fixed to work in separate repo nimsuggest was separated to its own repo and lives at the only problem is that installing it via nimble fails this should be fixed and the nimsuggest in this repo should stop diverging from the separated version ,0
221,4786941529.0,IssuesEvent,2016-10-29 18:22:12,rancher/rancher,https://api.github.com/repos/rancher/rancher,opened,"LB stuck in ""Reinitilaizing"" state when the environment is deactivated and ",kind/bug setup/automation,"Server version - Build from master.
Steps to reproduce the problem:
Create an environment with services and LB service .
Deactivate environment.
Activate environment.
LB container is stuck in ""Reinitilaizing"" state forever.
ha proxy logs:
```10/29/2016 9:27:42 AMtime=""2016-10-29T16:27:42Z"" level=info msg=""KUBERNETES_URL is not set, skipping init of kubernetes controller""
10/29/2016 9:27:43 AMtime=""2016-10-29T16:27:43Z"" level=info msg=""Starting Rancher LB service""
10/29/2016 9:27:43 AMtime=""2016-10-29T16:27:43Z"" level=info msg=""LB controller: rancher""
10/29/2016 9:27:43 AMtime=""2016-10-29T16:27:43Z"" level=info msg=""LB provider: haproxy""
10/29/2016 9:27:43 AMtime=""2016-10-29T16:27:43Z"" level=info msg=""starting rancher controller""
10/29/2016 9:27:43 AMtime=""2016-10-29T16:27:43Z"" level=info msg=""Healthcheck handler is listening on :10241""
10/29/2016 9:27:43 AMtime=""2016-10-29T16:27:43Z"" level=info msg=""Syncing up LB""
10/29/2016 9:27:43 AMtime=""2016-10-29T16:27:43Z"" level=info msg="" -- staring haproxy\n * Starting haproxy haproxy\n ...done.\n""
10/29/2016 9:27:44 AMtime=""2016-10-29T16:27:44Z"" level=info msg="" -- reloading haproxy config with the new config changes\n[WARNING] 302/162744 (43) : config : 'option forwardfor' ignored for proxy 'default' as it requires HTTP mode.\n""
10/29/2016 9:27:48 AMtime=""2016-10-29T16:27:48Z"" level=info msg=""Syncing up LB""
10/29/2016 9:27:48 AMtime=""2016-10-29T16:27:48Z"" level=info msg="" -- no changes in haproxy config\n""
10/29/2016 9:27:53 AMtime=""2016-10-29T16:27:53Z"" level=info msg=""Syncing up LB""
10/29/2016 9:27:53 AMtime=""2016-10-29T16:27:53Z"" level=info msg="" -- no changes in haproxy config\n""
10/29/2016 9:28:04 AMtime=""2016-10-29T16:28:04Z"" level=info msg=""Received SIGTERM, shutting down""
10/29/2016 9:28:04 AMtime=""2016-10-29T16:28:04Z"" level=info msg=""Shutting down rancher controller""
10/29/2016 9:28:04 AMtime=""2016-10-29T16:28:04Z"" level=info msg=""Shutting down provider haproxy""
10/29/2016 9:28:04 AMtime=""2016-10-29T16:28:04Z"" level=info msg=""Error during shutdown shutdown already in progress""
10/29/2016 9:28:04 AMtime=""2016-10-29T16:28:04Z"" level=info msg=""Exiting with 1""
10/29/2016 9:28:15 AMtime=""2016-10-29T16:28:15Z"" level=info msg=""KUBERNETES_URL is not set, skipping init of kubernetes controller""
10/29/2016 9:28:15 AMtime=""2016-10-29T16:28:15Z"" level=info msg=""Starting Rancher LB service""
10/29/2016 9:28:15 AMtime=""2016-10-29T16:28:15Z"" level=info msg=""LB controller: rancher""
10/29/2016 9:28:15 AMtime=""2016-10-29T16:28:15Z"" level=info msg=""LB provider: haproxy""
10/29/2016 9:28:15 AMtime=""2016-10-29T16:28:15Z"" level=info msg=""starting rancher controller""
10/29/2016 9:28:15 AMtime=""2016-10-29T16:28:15Z"" level=info msg=""Healthcheck handler is listening on :10241""
10/29/2016 9:28:16 AMtime=""2016-10-29T16:28:16Z"" level=info msg=""Syncing up LB""
10/29/2016 9:28:21 AMtime=""2016-10-29T16:28:21Z"" level=error msg=""Failed to apply lb config on provider: Failed to wait for haproxy to exit init stage""
10/29/2016 9:28:21 AMtime=""2016-10-29T16:28:21Z"" level=info msg=""Syncing up LB""
10/29/2016 9:28:26 AMtime=""2016-10-29T16:28:26Z"" level=error msg=""Failed to apply lb config on provider: Failed to wait for haproxy to exit init stage""
10/29/2016 9:28:26 AMtime=""2016-10-29T16:28:26Z"" level=info msg=""Syncing up LB""
10/29/2016 9:28:31 AMtime=""2016-10-29T16:28:31Z"" level=error msg=""Failed to apply lb config on provider: Failed to wait for haproxy to exit init stage""
10/29/2016 9:28:36 AMtime=""2016-10-29T16:28:36Z"" level=info msg=""Syncing up LB""
10/29/2016 9:28:41 AMtime=""2016-10-29T16:28:41Z"" level=error msg=""Failed to apply lb config on provider: Failed to wait for haproxy to exit init stage""
10/29/2016 9:28:46 AMtime=""2016-10-29T16:28:46Z"" level=info msg=""Syncing up LB""
10/29/2016 9:28:51 AMtime=""2016-10-29T16:28:51Z"" level=error msg=""Failed to apply lb config on provider: Failed to wait for haproxy to exit init stage""
10/29/2016 9:29:01 AMtime=""2016-10-29T16:29:01Z"" level=info msg=""Syncing up LB""
10/29/2016 9:29:06 AMtime=""2016-10-29T16:29:06Z"" level=error msg=""Failed to apply lb config on provider: Failed to wait for haproxy to exit init stage""
10/29/2016 9:29:11 AMtime=""2016-10-29T16:29:11Z"" level=info msg=""Syncing up LB""
10/29/2016 9:29:15 AMtime=""2016-10-29T16:29:15Z"" level=info msg="" -- staring haproxy\nPidfile (and pid) already exist.\n""
10/29/2016 9:29:16 AMtime=""2016-10-29T16:29:16Z"" level=error msg=""Failed to apply lb config on provider: Failed to wait for haproxy to exit init stage""
10/29/2016 9:29:16 AMtime=""2016-10-29T16:29:16Z"" level=info msg=""Syncing up LB""
10/29/2016 9:29:16 AMtime=""2016-10-29T16:29:16Z"" level=info msg="" -- no changes in haproxy config\n""
10/29/2016 9:29:36 AMtime=""2016-10-29T16:29:36Z"" level=info msg=""Syncing up LB""
10/29/2016 9:29:36 AMtime=""2016-10-29T16:29:36Z"" level=info msg="" -- no changes in haproxy config\n""
10/29/2016 9:29:51 AMtime=""2016-10-29T16:29:51Z"" level=info msg=""Syncing up LB""
10/29/2016 9:29:51 AMtime=""2016-10-29T16:29:51Z"" level=info msg="" -- no changes in haproxy config\n""
```",1.0,"LB stuck in ""Reinitilaizing"" state when the environment is deactivated and - Server version - Build from master.
Steps to reproduce the problem:
Create an environment with services and LB service .
Deactivate environment.
Activate environment.
LB container is stuck in ""Reinitilaizing"" state forever.
ha proxy logs:
```10/29/2016 9:27:42 AMtime=""2016-10-29T16:27:42Z"" level=info msg=""KUBERNETES_URL is not set, skipping init of kubernetes controller""
10/29/2016 9:27:43 AMtime=""2016-10-29T16:27:43Z"" level=info msg=""Starting Rancher LB service""
10/29/2016 9:27:43 AMtime=""2016-10-29T16:27:43Z"" level=info msg=""LB controller: rancher""
10/29/2016 9:27:43 AMtime=""2016-10-29T16:27:43Z"" level=info msg=""LB provider: haproxy""
10/29/2016 9:27:43 AMtime=""2016-10-29T16:27:43Z"" level=info msg=""starting rancher controller""
10/29/2016 9:27:43 AMtime=""2016-10-29T16:27:43Z"" level=info msg=""Healthcheck handler is listening on :10241""
10/29/2016 9:27:43 AMtime=""2016-10-29T16:27:43Z"" level=info msg=""Syncing up LB""
10/29/2016 9:27:43 AMtime=""2016-10-29T16:27:43Z"" level=info msg="" -- staring haproxy\n * Starting haproxy haproxy\n ...done.\n""
10/29/2016 9:27:44 AMtime=""2016-10-29T16:27:44Z"" level=info msg="" -- reloading haproxy config with the new config changes\n[WARNING] 302/162744 (43) : config : 'option forwardfor' ignored for proxy 'default' as it requires HTTP mode.\n""
10/29/2016 9:27:48 AMtime=""2016-10-29T16:27:48Z"" level=info msg=""Syncing up LB""
10/29/2016 9:27:48 AMtime=""2016-10-29T16:27:48Z"" level=info msg="" -- no changes in haproxy config\n""
10/29/2016 9:27:53 AMtime=""2016-10-29T16:27:53Z"" level=info msg=""Syncing up LB""
10/29/2016 9:27:53 AMtime=""2016-10-29T16:27:53Z"" level=info msg="" -- no changes in haproxy config\n""
10/29/2016 9:28:04 AMtime=""2016-10-29T16:28:04Z"" level=info msg=""Received SIGTERM, shutting down""
10/29/2016 9:28:04 AMtime=""2016-10-29T16:28:04Z"" level=info msg=""Shutting down rancher controller""
10/29/2016 9:28:04 AMtime=""2016-10-29T16:28:04Z"" level=info msg=""Shutting down provider haproxy""
10/29/2016 9:28:04 AMtime=""2016-10-29T16:28:04Z"" level=info msg=""Error during shutdown shutdown already in progress""
10/29/2016 9:28:04 AMtime=""2016-10-29T16:28:04Z"" level=info msg=""Exiting with 1""
10/29/2016 9:28:15 AMtime=""2016-10-29T16:28:15Z"" level=info msg=""KUBERNETES_URL is not set, skipping init of kubernetes controller""
10/29/2016 9:28:15 AMtime=""2016-10-29T16:28:15Z"" level=info msg=""Starting Rancher LB service""
10/29/2016 9:28:15 AMtime=""2016-10-29T16:28:15Z"" level=info msg=""LB controller: rancher""
10/29/2016 9:28:15 AMtime=""2016-10-29T16:28:15Z"" level=info msg=""LB provider: haproxy""
10/29/2016 9:28:15 AMtime=""2016-10-29T16:28:15Z"" level=info msg=""starting rancher controller""
10/29/2016 9:28:15 AMtime=""2016-10-29T16:28:15Z"" level=info msg=""Healthcheck handler is listening on :10241""
10/29/2016 9:28:16 AMtime=""2016-10-29T16:28:16Z"" level=info msg=""Syncing up LB""
10/29/2016 9:28:21 AMtime=""2016-10-29T16:28:21Z"" level=error msg=""Failed to apply lb config on provider: Failed to wait for haproxy to exit init stage""
10/29/2016 9:28:21 AMtime=""2016-10-29T16:28:21Z"" level=info msg=""Syncing up LB""
10/29/2016 9:28:26 AMtime=""2016-10-29T16:28:26Z"" level=error msg=""Failed to apply lb config on provider: Failed to wait for haproxy to exit init stage""
10/29/2016 9:28:26 AMtime=""2016-10-29T16:28:26Z"" level=info msg=""Syncing up LB""
10/29/2016 9:28:31 AMtime=""2016-10-29T16:28:31Z"" level=error msg=""Failed to apply lb config on provider: Failed to wait for haproxy to exit init stage""
10/29/2016 9:28:36 AMtime=""2016-10-29T16:28:36Z"" level=info msg=""Syncing up LB""
10/29/2016 9:28:41 AMtime=""2016-10-29T16:28:41Z"" level=error msg=""Failed to apply lb config on provider: Failed to wait for haproxy to exit init stage""
10/29/2016 9:28:46 AMtime=""2016-10-29T16:28:46Z"" level=info msg=""Syncing up LB""
10/29/2016 9:28:51 AMtime=""2016-10-29T16:28:51Z"" level=error msg=""Failed to apply lb config on provider: Failed to wait for haproxy to exit init stage""
10/29/2016 9:29:01 AMtime=""2016-10-29T16:29:01Z"" level=info msg=""Syncing up LB""
10/29/2016 9:29:06 AMtime=""2016-10-29T16:29:06Z"" level=error msg=""Failed to apply lb config on provider: Failed to wait for haproxy to exit init stage""
10/29/2016 9:29:11 AMtime=""2016-10-29T16:29:11Z"" level=info msg=""Syncing up LB""
10/29/2016 9:29:15 AMtime=""2016-10-29T16:29:15Z"" level=info msg="" -- staring haproxy\nPidfile (and pid) already exist.\n""
10/29/2016 9:29:16 AMtime=""2016-10-29T16:29:16Z"" level=error msg=""Failed to apply lb config on provider: Failed to wait for haproxy to exit init stage""
10/29/2016 9:29:16 AMtime=""2016-10-29T16:29:16Z"" level=info msg=""Syncing up LB""
10/29/2016 9:29:16 AMtime=""2016-10-29T16:29:16Z"" level=info msg="" -- no changes in haproxy config\n""
10/29/2016 9:29:36 AMtime=""2016-10-29T16:29:36Z"" level=info msg=""Syncing up LB""
10/29/2016 9:29:36 AMtime=""2016-10-29T16:29:36Z"" level=info msg="" -- no changes in haproxy config\n""
10/29/2016 9:29:51 AMtime=""2016-10-29T16:29:51Z"" level=info msg=""Syncing up LB""
10/29/2016 9:29:51 AMtime=""2016-10-29T16:29:51Z"" level=info msg="" -- no changes in haproxy config\n""
```",1,lb stuck in reinitilaizing state when the environment is deactivated and server version build from master steps to reproduce the problem create an environment with services and lb service deactivate environment activate environment lb container is stuck in reinitilaizing state forever ha proxy logs amtime level info msg kubernetes url is not set skipping init of kubernetes controller amtime level info msg starting rancher lb service amtime level info msg lb controller rancher amtime level info msg lb provider haproxy amtime level info msg starting rancher controller amtime level info msg healthcheck handler is listening on amtime level info msg syncing up lb amtime level info msg staring haproxy n starting haproxy haproxy n done n amtime level info msg reloading haproxy config with the new config changes n config option forwardfor ignored for proxy default as it requires http mode n amtime level info msg syncing up lb amtime level info msg no changes in haproxy config n amtime level info msg syncing up lb amtime level info msg no changes in haproxy config n amtime level info msg received sigterm shutting down amtime level info msg shutting down rancher controller amtime level info msg shutting down provider haproxy amtime level info msg error during shutdown shutdown already in progress amtime level info msg exiting with amtime level info msg kubernetes url is not set skipping init of kubernetes controller amtime level info msg starting rancher lb service amtime level info msg lb controller rancher amtime level info msg lb provider haproxy amtime level info msg starting rancher controller amtime level info msg healthcheck handler is listening on amtime level info msg syncing up lb amtime level error msg failed to apply lb config on provider failed to wait for haproxy to exit init stage amtime level info msg syncing up lb amtime level error msg failed to apply lb config on provider failed to wait for haproxy to exit init stage amtime level info msg syncing up lb amtime level error msg failed to apply lb config on provider failed to wait for haproxy to exit init stage amtime level info msg syncing up lb amtime level error msg failed to apply lb config on provider failed to wait for haproxy to exit init stage amtime level info msg syncing up lb amtime level error msg failed to apply lb config on provider failed to wait for haproxy to exit init stage amtime level info msg syncing up lb amtime level error msg failed to apply lb config on provider failed to wait for haproxy to exit init stage amtime level info msg syncing up lb amtime level info msg staring haproxy npidfile and pid already exist n amtime level error msg failed to apply lb config on provider failed to wait for haproxy to exit init stage amtime level info msg syncing up lb amtime level info msg no changes in haproxy config n amtime level info msg syncing up lb amtime level info msg no changes in haproxy config n amtime level info msg syncing up lb amtime level info msg no changes in haproxy config n ,1
7216,24459413514.0,IssuesEvent,2022-10-07 09:44:42,o3de/o3de,https://api.github.com/repos/o3de/o3de,opened,Nightly build Bug Report: Windows periodic_test_gpu_profile job red due to timeouts,kind/bug needs-triage sig/graphics-audio kind/automation,"**Failed Jenkins Job Information:**
https://jenkins-pipeline.agscollab.com/blue/organizations/jenkins/O3DE-LY-Fork-development_periodic-incremental-daily-internal/detail/O3DE-LY-Fork-development_periodic-incremental-daily-internal/504/pipeline/799
```
[2022-10-07T06:18:57.155Z] The following tests FAILED:
[2022-10-07T06:18:57.155Z] 184 - AutomatedTesting::EditorLevelLoadingPerfTests_DX12.periodic::TEST_RUN (Failed)
[2022-10-07T06:18:57.155Z] 185 - AutomatedTesting::EditorLevelLoadingPerfTests_Vulkan.periodic::TEST_RUN (Failed)
..\..\..\..\..\..\AutomatedTesting\Gem\PythonTests\Performance\TestSuite_Periodic_DX12.py::TestAutomation::Time_EditorLevelLoading_10KEntityCpuPerfTest[windows-windows_editor-AutomatedTesting] FAILED [ 50%]
Test ABORTED after not completing within 180 seconds
..\..\..\..\..\..\AutomatedTesting\Gem\PythonTests\Performance\TestSuite_Periodic_Vulkan.py::TestAutomation::Time_EditorLevelLoading_10KEntityCpuPerfTest[windows-windows_editor-AutomatedTesting] FAILED [ 50%]
Test ABORTED after not completing within 600 seconds
```
**Attachments**
[log.txt](https://github.com/o3de/o3de/files/9732784/log.txt)",1.0,"Nightly build Bug Report: Windows periodic_test_gpu_profile job red due to timeouts - **Failed Jenkins Job Information:**
https://jenkins-pipeline.agscollab.com/blue/organizations/jenkins/O3DE-LY-Fork-development_periodic-incremental-daily-internal/detail/O3DE-LY-Fork-development_periodic-incremental-daily-internal/504/pipeline/799
```
[2022-10-07T06:18:57.155Z] The following tests FAILED:
[2022-10-07T06:18:57.155Z] 184 - AutomatedTesting::EditorLevelLoadingPerfTests_DX12.periodic::TEST_RUN (Failed)
[2022-10-07T06:18:57.155Z] 185 - AutomatedTesting::EditorLevelLoadingPerfTests_Vulkan.periodic::TEST_RUN (Failed)
..\..\..\..\..\..\AutomatedTesting\Gem\PythonTests\Performance\TestSuite_Periodic_DX12.py::TestAutomation::Time_EditorLevelLoading_10KEntityCpuPerfTest[windows-windows_editor-AutomatedTesting] FAILED [ 50%]
Test ABORTED after not completing within 180 seconds
..\..\..\..\..\..\AutomatedTesting\Gem\PythonTests\Performance\TestSuite_Periodic_Vulkan.py::TestAutomation::Time_EditorLevelLoading_10KEntityCpuPerfTest[windows-windows_editor-AutomatedTesting] FAILED [ 50%]
Test ABORTED after not completing within 600 seconds
```
**Attachments**
[log.txt](https://github.com/o3de/o3de/files/9732784/log.txt)",1,nightly build bug report windows periodic test gpu profile job red due to timeouts failed jenkins job information the following tests failed automatedtesting editorlevelloadingperftests periodic test run failed automatedtesting editorlevelloadingperftests vulkan periodic test run failed automatedtesting gem pythontests performance testsuite periodic py testautomation time editorlevelloading failed test aborted after not completing within seconds automatedtesting gem pythontests performance testsuite periodic vulkan py testautomation time editorlevelloading failed test aborted after not completing within seconds attachments ,1
139658,18853735307.0,IssuesEvent,2021-11-12 01:37:21,sesong11/example,https://api.github.com/repos/sesong11/example,opened,CVE-2019-12814 (Medium) detected in jackson-databind-2.9.9.jar,security vulnerability,"## CVE-2019-12814 - Medium Severity Vulnerability
Vulnerable Library - jackson-databind-2.9.9.jar
General data-binding functionality for Jackson: works on core streaming API
A Polymorphic Typing issue was discovered in FasterXML jackson-databind 2.x through 2.9.9. When Default Typing is enabled (either globally or for a specific property) for an externally exposed JSON endpoint and the service has JDOM 1.x or 2.x jar in the classpath, an attacker can send a specifically crafted JSON message that allows them to read arbitrary local files on the server.
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2019-12814 (Medium) detected in jackson-databind-2.9.9.jar - ## CVE-2019-12814 - Medium Severity Vulnerability
Vulnerable Library - jackson-databind-2.9.9.jar
General data-binding functionality for Jackson: works on core streaming API
A Polymorphic Typing issue was discovered in FasterXML jackson-databind 2.x through 2.9.9. When Default Typing is enabled (either globally or for a specific property) for an externally exposed JSON endpoint and the service has JDOM 1.x or 2.x jar in the classpath, an attacker can send a specifically crafted JSON message that allows them to read arbitrary local files on the server.
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve medium detected in jackson databind jar cve medium severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file example quartz jdbc pom xml path to vulnerable library root repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy spring boot starter web release jar root library spring boot starter json release jar x jackson databind jar vulnerable library vulnerability details a polymorphic typing issue was discovered in fasterxml jackson databind x through when default typing is enabled either globally or for a specific property for an externally exposed json endpoint and the service has jdom x or x jar in the classpath an attacker can send a specifically crafted json message that allows them to read arbitrary local files on the server publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource ,0
4713,17316359917.0,IssuesEvent,2021-07-27 06:46:10,rancher-sandbox/cOS-toolkit,https://api.github.com/repos/rancher-sandbox/cOS-toolkit,opened,Add AMI images id list to releases artifact,automation enhancement release,"**Is your feature request related to a problem? Please describe.**
Having a clear list of the published AMIs helps users to find the image rather than having to dig with the aws-cli
**Describe the solution you'd like**
An artifact containing the AMI IDs that is uploaded during the release process
",1.0,"Add AMI images id list to releases artifact - **Is your feature request related to a problem? Please describe.**
Having a clear list of the published AMIs helps users to find the image rather than having to dig with the aws-cli
**Describe the solution you'd like**
An artifact containing the AMI IDs that is uploaded during the release process
",1,add ami images id list to releases artifact is your feature request related to a problem please describe having a clear list of the published amis helps users to find the image rather have to dig with the aws cli describe the solution you d like an artifact which is uploaded during releasing which contains the ami ids ,1
5672,20733453704.0,IssuesEvent,2022-03-14 11:35:23,SuperOfficeDocs/superoffice-docs,https://api.github.com/repos/SuperOfficeDocs/superoffice-docs,closed,Feedback for Enum values for ScreenChooserType,doc-enhancement crmscript automation,"
This list does not contain all enums; it should be updated to include all Trigger events from https://docs.superoffice.com/automation/trigger/reference/CRMScript.Event.Trigger.html
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.superOffice.com ➟ Docs Team processing.*
* Content Source: [enum-screenchoosertype](https://github.com/SuperOfficeDocs/superoffice-docs/blob/main/docs/database/tables/enums/screenchoosertype.md/#L1)",1.0,"Feedback for Enum values for ScreenChooserType -
This list does not contain all enums; it should be updated to include all Trigger events from https://docs.superoffice.com/automation/trigger/reference/CRMScript.Event.Trigger.html
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.superOffice.com ➟ Docs Team processing.*
* Content Source: [enum-screenchoosertype](https://github.com/SuperOfficeDocs/superoffice-docs/blob/main/docs/database/tables/enums/screenchoosertype.md/#L1)",1,feedback for enum values for screenchoosertype this list does not contain all enums should be updated to include all trigger events from document details ⚠ do not edit this section it is required for docs superoffice com ➟ docs team processing content source ,1
26310,19984842410.0,IssuesEvent,2022-01-30 13:54:57,yt-project/yt,https://api.github.com/repos/yt-project/yt,closed,Reduce size of pep8speaks config file,new contributor friendly infrastructure,"After https://github.com/OrkoHunter/pep8speaks/pull/106 has been merged, it looks like we can reduce the config file presence -- and reduce duplication -- for pep8speaks.
I believe it would be sufficient to remove our .pep8speaks.yml file, but we should investigate if we can remove the ""ignore"" and ""exclude"" sections and leave the bits where we define how the bot should talk.",1.0,"Reduce size of pep8speaks config file - After https://github.com/OrkoHunter/pep8speaks/pull/106 has been merged, it looks like we can reduce the config file presence -- and reduce duplication -- for pep8speaks.
I believe it would be sufficient to remove our .pep8speaks.yml file, but we should investigate if we can remove the ""ignore"" and ""exclude"" sections and leave the bits where we define how the bot should talk.",0,reduce size of config file after has been merged it looks like we can reduce the config file presence and reduce duplication for i believe it would be sufficient to remove our yml file but we should investigate if we can remove the ignore and exclude sections and leave the bits where we define how the bot should talk ,0
4883,17933343541.0,IssuesEvent,2021-09-10 12:23:37,CDCgov/prime-reportstream,https://api.github.com/repos/CDCgov/prime-reportstream,closed,add Greenlight Urgent Care to the list of senders in RS,sender-automation,what is required to add a new sender and allow them to start submitting data?,1.0,add Greenlight Urgent Care to the list of senders in RS - what is required to add a new sender and allow them to start submitting data?,1,add greenlight urgent care to the list of senders in rs what is required to add a new sender and allow them to start submitting data ,1
3209,13186175957.0,IssuesEvent,2020-08-12 23:18:57,bkthomps/Containers,https://api.github.com/repos/bkthomps/Containers,closed,Update CI/CD,automation,"Right now, it builds with coverage, then sends to codecov for analysis. Additionally, there is another tool to check code quality. Valgrind has to be run manually.
Instead, replace this with: build without coverage, run clang-tidy and valgrind with -Werror. Then, build with coverage and send to codecov. This requires a container with valgrind and codecov installed. However, this means we don't need the code quality tool (only allowed 10 code quality checks per day with that tool).",1.0,"Update CI/CD - Right now, it builds with coverage, then sends to codecov for analysis. Additionally, there is another tool to check code quality. Valgrind has to be run manually.
Instead, replace this with: build without coverage, run clang-tidy and valgrind with -Werror. Then, build with coverage and send to codecov. This requires a container with valgrind and codecov installed. However, this means we don't need the code quality tool (only allowed 10 code quality checks per day with that tool).",1,update ci cd right now it builds with coverage then sends to codecov for analysis additionally there is another tool to check code quality valgrind has to be run manually instead replace this with build without coverage run clang tidy and valgrind with werror then build with coverage and send to codecov this requires a container with valgrind and codecov installed however this means we don t need to code quality tool only allowed code quality checks per day with that tool ,1
2190,11542783297.0,IssuesEvent,2020-02-18 08:17:22,sourcegraph/sourcegraph,https://api.github.com/repos/sourcegraph/sourcegraph,opened,a8n: Support Bitbucket build status webhooks,automation,"We need to support Bitbucket build status webhooks once it has been added to the plugin:
https://github.com/sourcegraph/sourcegraph/issues/8386
This issue was extracted from this larger one:
https://github.com/sourcegraph/sourcegraph/issues/7093",1.0,"a8n: Support Bitbucket build status webhooks - We need to support Bitbucket build status webhooks once it has been added to the plugin:
https://github.com/sourcegraph/sourcegraph/issues/8386
This issue was extracted from this larger one:
https://github.com/sourcegraph/sourcegraph/issues/7093",1, support bitbucket build status webhooks we need to support bibucket build status webhooks once it has been added to the plugin this issue was extracted from this larger one ,1
9327,28010813531.0,IssuesEvent,2023-03-27 18:29:05,yugabyte/yugabyte-db,https://api.github.com/repos/yugabyte/yugabyte-db,closed,[Backup Restore] Restore failed for dropped colocated database ,kind/bug area/docdb priority/medium qa_automation,"Jira Link: [DB-5918](https://yugabyte.atlassian.net/browse/DB-5918)
### Description
Steps:
1. Take Backup of a Colocated DB
2. DROP the DB
3. Restore the Backup
Observed that the Restore failed with the error below:
`2023-03-22 11:19:01,918 test_base.py:178 ERROR testysqltabletsplittingwithrpc-aws-rf3 ITEST FAILED testysqltabletsplittingwithrpc-aws-rf3 : RuntimeError('wait_for_task: Failed task with errors in 30.35816502571106s:\nFailed to execute task {""platformVersion"":""2.17.3.0-b16"",""sleepAfterMasterRestartMillis"":180000,""sleepAfterTServerRestartMillis"":180000,""nodeExporterUser"":""prometheus"",""universeUUID"":""feddcbfa-8379-4a3e-8ba7-8c9af9788fe9"",""enableYbc"":false,""installYbc"":false,""ybcInstalled"":false,""encryptionAtRestConfig"":{""encryptionAtRestEnabled"":false,""opType"":""UNDEFINED"",""type"":""DATA_KEY""},""communicationPorts"":{""masterHttpPort"":7000,""masterRpcPort"":7100,""tserverHttpPort"":9000,""tserverRpcPort"":9100,""ybControllerHttpPort"":14000,""yb..., hit error:\n\nTask id 7901e4e9-f195-4897-95d6-8402d158cac0_PGSQL_TABLE_TYPE_colocated_db status: Failed with error COMMAND_FAILED.')`
```
YW 2023-03-22T11:40:46.430Z [ERROR] c8c6cf68-4a31-4885-97b5-7909b9427f25 from TaskExecutor in TaskPool-1 - Failed to execute task type RestoreBackup UUID 16226ab6-9715-44bf-a2c0-a4bb11eba31b details {""platformVersion"":""2.17.2.0-b216"",""sleepAfterMasterRestartMillis"":180000,""sleepAfterTServerRestartMillis"":180000,""nodeExporterUser"":""prometheus"",""universeUUID"":""fa1fd3d9-cbc9-4b52-bd61-2c0b14c66063"",""enableYbc"":false,""installYbc"":false,""ybcInstalled"":false,""encryptionAtRestConfig"":{""encryptionAtRestEnabled"":false,""opType"":""UNDEFINED"",""type"":""DATA_KEY""},""communicationPorts"":{""masterHttpPort"":7000,""masterRpcPort"":7100,""tserverHttpPort"":9000,""tserverRpcPort"":9100,""ybControllerHttpPort"":14000,""ybControllerrRpcPort"":18018,""redisServerHttpPort"":11000,""redisServerRpcPort"":6379,""yqlServerHttpPort"":12000,""yqlServerRpcPort"":9042,""ysqlServerHttpPort"":13000,""ysqlServerRpcPort"":5433,""nodeExporterPort"":9300},""extraDependencies"":{""installNodeExporter"":true},""firstTry"":true,""customerUUID"":""c9c4e6e2-a640-43f7-a522-29d2f2ededbd"",""actionType"":""RESTORE"",""category"":""YB_CONTROLLER"",""backupStorageInfoList"":[{""backupType"":""PGSQL_TABLE_TYPE"",""storageLocation"":""gs://itest-backup/univ-fa1fd3d9-cbc9-4b52-bd61-2c0b14c66063/ybc_backup-2023-03-22T11:37:39-1662548782/multi-table-colocated_db"",""keyspace"":""colocated_db"",""sse"":false,""oldOwner"":""postgres""}],""prefixUUID"":""1a61a87a-ced3-48ff-a2cf-d21f56e5910b"",""currentIdx"":0,""currentYbcTaskId"":""1a61a87a-ced3-48ff-a2cf-d21f56e5910b_PGSQL_TABLE_TYPE_colocated_db"",""enableVerboseLogs"":false,""storageConfigUUID"":""0386d3e5-52f5-4b4a-b4ca-005d622e349e"",""alterLoadBalancer"":true,""disableChecksum"":false,""useTablespaces"":false,""disableMultipart"":false,""parallelism"":8,""targetXClusterConfigs"":[],""sourceXClusterConfigs"":[]}, hit error.
java.lang.RuntimeException: RestoreBackupYbc : completed 1 out of 1 tasks. failed.
at com.yugabyte.yw.commissioner.TaskExecutor$RunnableTask.runSubTasks(TaskExecutor.java:1110)
at com.yugabyte.yw.commissioner.tasks.RestoreBackup.run(RestoreBackup.java:65)
at com.yugabyte.yw.commissioner.TaskExecutor$AbstractRunnableTask.run(TaskExecutor.java:796)
at com.yugabyte.yw.commissioner.TaskExecutor$RunnableTask.run(TaskExecutor.java:1005)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at com.yugabyte.yw.common.logging.MDCAwareRunnable.run(MDCAwareRunnable.java:46)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
Caused by: com.yugabyte.yw.common.PlatformServiceException: Task id 1a61a87a-ced3-48ff-a2cf-d21f56e5910b_PGSQL_TABLE_TYPE_colocated_db status: Failed with error COMMAND_FAILED
at com.yugabyte.yw.commissioner.YbcTaskBase.handleTaskCompleteStage(YbcTaskBase.java:95)
at com.yugabyte.yw.commissioner.YbcTaskBase.pollTaskProgress(YbcTaskBase.java:66)
at com.yugabyte.yw.commissioner.tasks.subtasks.RestoreBackupYbc.run(RestoreBackupYbc.java:179)
at com.yugabyte.yw.commissioner.TaskExecutor$AbstractRunnableTask.run(TaskExecutor.java:796)
at com.yugabyte.yw.commissioner.TaskExecutor$RunnableSubTask.run(TaskExecutor.java:1180)
... 6 common frames omitted
```
Further details in Slack Thread accessible internal to YB - https://yugabyte.slack.com/archives/C8QDREM0R/p1679490341873789
cc: @renjith-yb @kripasreenivasan
### Warning: Please confirm that this issue does not contain any sensitive information
- [X] I confirm this issue does not contain any sensitive information.
[DB-5918]: https://yugabyte.atlassian.net/browse/DB-5918?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ",1.0,"[Backup Restore] Restore failed for dropped colocated database - Jira Link: [DB-5918](https://yugabyte.atlassian.net/browse/DB-5918)
### Description
Steps:
1. Take Backup of a Colocated DB
2. DROP the DB
3. Restore the Backup
Observed that the Restore failed with the error below:
`2023-03-22 11:19:01,918 test_base.py:178 ERROR testysqltabletsplittingwithrpc-aws-rf3 ITEST FAILED testysqltabletsplittingwithrpc-aws-rf3 : RuntimeError('wait_for_task: Failed task with errors in 30.35816502571106s:\nFailed to execute task {""platformVersion"":""2.17.3.0-b16"",""sleepAfterMasterRestartMillis"":180000,""sleepAfterTServerRestartMillis"":180000,""nodeExporterUser"":""prometheus"",""universeUUID"":""feddcbfa-8379-4a3e-8ba7-8c9af9788fe9"",""enableYbc"":false,""installYbc"":false,""ybcInstalled"":false,""encryptionAtRestConfig"":{""encryptionAtRestEnabled"":false,""opType"":""UNDEFINED"",""type"":""DATA_KEY""},""communicationPorts"":{""masterHttpPort"":7000,""masterRpcPort"":7100,""tserverHttpPort"":9000,""tserverRpcPort"":9100,""ybControllerHttpPort"":14000,""yb..., hit error:\n\nTask id 7901e4e9-f195-4897-95d6-8402d158cac0_PGSQL_TABLE_TYPE_colocated_db status: Failed with error COMMAND_FAILED.')`
```
YW 2023-03-22T11:40:46.430Z [ERROR] c8c6cf68-4a31-4885-97b5-7909b9427f25 from TaskExecutor in TaskPool-1 - Failed to execute task type RestoreBackup UUID 16226ab6-9715-44bf-a2c0-a4bb11eba31b details {""platformVersion"":""2.17.2.0-b216"",""sleepAfterMasterRestartMillis"":180000,""sleepAfterTServerRestartMillis"":180000,""nodeExporterUser"":""prometheus"",""universeUUID"":""fa1fd3d9-cbc9-4b52-bd61-2c0b14c66063"",""enableYbc"":false,""installYbc"":false,""ybcInstalled"":false,""encryptionAtRestConfig"":{""encryptionAtRestEnabled"":false,""opType"":""UNDEFINED"",""type"":""DATA_KEY""},""communicationPorts"":{""masterHttpPort"":7000,""masterRpcPort"":7100,""tserverHttpPort"":9000,""tserverRpcPort"":9100,""ybControllerHttpPort"":14000,""ybControllerrRpcPort"":18018,""redisServerHttpPort"":11000,""redisServerRpcPort"":6379,""yqlServerHttpPort"":12000,""yqlServerRpcPort"":9042,""ysqlServerHttpPort"":13000,""ysqlServerRpcPort"":5433,""nodeExporterPort"":9300},""extraDependencies"":{""installNodeExporter"":true},""firstTry"":true,""customerUUID"":""c9c4e6e2-a640-43f7-a522-29d2f2ededbd"",""actionType"":""RESTORE"",""category"":""YB_CONTROLLER"",""backupStorageInfoList"":[{""backupType"":""PGSQL_TABLE_TYPE"",""storageLocation"":""gs://itest-backup/univ-fa1fd3d9-cbc9-4b52-bd61-2c0b14c66063/ybc_backup-2023-03-22T11:37:39-1662548782/multi-table-colocated_db"",""keyspace"":""colocated_db"",""sse"":false,""oldOwner"":""postgres""}],""prefixUUID"":""1a61a87a-ced3-48ff-a2cf-d21f56e5910b"",""currentIdx"":0,""currentYbcTaskId"":""1a61a87a-ced3-48ff-a2cf-d21f56e5910b_PGSQL_TABLE_TYPE_colocated_db"",""enableVerboseLogs"":false,""storageConfigUUID"":""0386d3e5-52f5-4b4a-b4ca-005d622e349e"",""alterLoadBalancer"":true,""disableChecksum"":false,""useTablespaces"":false,""disableMultipart"":false,""parallelism"":8,""targetXClusterConfigs"":[],""sourceXClusterConfigs"":[]}, hit error.
java.lang.RuntimeException: RestoreBackupYbc : completed 1 out of 1 tasks. failed.
at com.yugabyte.yw.commissioner.TaskExecutor$RunnableTask.runSubTasks(TaskExecutor.java:1110)
at com.yugabyte.yw.commissioner.tasks.RestoreBackup.run(RestoreBackup.java:65)
at com.yugabyte.yw.commissioner.TaskExecutor$AbstractRunnableTask.run(TaskExecutor.java:796)
at com.yugabyte.yw.commissioner.TaskExecutor$RunnableTask.run(TaskExecutor.java:1005)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at com.yugabyte.yw.common.logging.MDCAwareRunnable.run(MDCAwareRunnable.java:46)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
Caused by: com.yugabyte.yw.common.PlatformServiceException: Task id 1a61a87a-ced3-48ff-a2cf-d21f56e5910b_PGSQL_TABLE_TYPE_colocated_db status: Failed with error COMMAND_FAILED
at com.yugabyte.yw.commissioner.YbcTaskBase.handleTaskCompleteStage(YbcTaskBase.java:95)
at com.yugabyte.yw.commissioner.YbcTaskBase.pollTaskProgress(YbcTaskBase.java:66)
at com.yugabyte.yw.commissioner.tasks.subtasks.RestoreBackupYbc.run(RestoreBackupYbc.java:179)
at com.yugabyte.yw.commissioner.TaskExecutor$AbstractRunnableTask.run(TaskExecutor.java:796)
at com.yugabyte.yw.commissioner.TaskExecutor$RunnableSubTask.run(TaskExecutor.java:1180)
... 6 common frames omitted
```
Further details in Slack Thread accessible internal to YB - https://yugabyte.slack.com/archives/C8QDREM0R/p1679490341873789
cc: @renjith-yb @kripasreenivasan
### Warning: Please confirm that this issue does not contain any sensitive information
- [X] I confirm this issue does not contain any sensitive information.
[DB-5918]: https://yugabyte.atlassian.net/browse/DB-5918?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ",1, restore failed for dropped colocated database jira link description steps take backup of a colocated db drop the db restore the backup observed restore failed with below error test base py error testysqltabletsplittingwithrpc aws itest failed testysqltabletsplittingwithrpc aws runtimeerror wait for task failed task with errors in nfailed to execute task platformversion sleepaftermasterrestartmillis sleepaftertserverrestartmillis nodeexporteruser prometheus universeuuid feddcbfa enableybc false installybc false ybcinstalled false encryptionatrestconfig encryptionatrestenabled false optype undefined type data key communicationports masterhttpport masterrpcport tserverhttpport tserverrpcport ybcontrollerhttpport yb hit error n ntask id pgsql table type colocated db status failed with error command failed yw from taskexecutor in taskpool failed to execute task type restorebackup uuid details platformversion sleepaftermasterrestartmillis sleepaftertserverrestartmillis nodeexporteruser prometheus universeuuid enableybc false installybc false ybcinstalled false encryptionatrestconfig encryptionatrestenabled false optype undefined type data key communicationports masterhttpport masterrpcport tserverhttpport tserverrpcport ybcontrollerhttpport ybcontrollerrrpcport redisserverhttpport redisserverrpcport yqlserverhttpport yqlserverrpcport ysqlserverhttpport ysqlserverrpcport nodeexporterport extradependencies installnodeexporter true firsttry true customeruuid actiontype restore category yb controller backupstorageinfolist prefixuuid currentidx currentybctaskid pgsql table type colocated db enableverboselogs false storageconfiguuid alterloadbalancer true disablechecksum false usetablespaces false disablemultipart false parallelism targetxclusterconfigs sourcexclusterconfigs hit error java lang runtimeexception restorebackupybc completed out of tasks failed at com yugabyte yw commissioner taskexecutor runnabletask runsubtasks taskexecutor java at com yugabyte yw commissioner tasks restorebackup run restorebackup java at com yugabyte yw commissioner taskexecutor abstractrunnabletask run taskexecutor java at com yugabyte yw commissioner taskexecutor runnabletask run taskexecutor java at java util concurrent executors runnableadapter call executors java at java util concurrent futuretask run futuretask java at com yugabyte yw common logging mdcawarerunnable run mdcawarerunnable java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java caused by com yugabyte yw common platformserviceexception task id pgsql table type colocated db status failed with error command failed at com yugabyte yw commissioner ybctaskbase handletaskcompletestage ybctaskbase java at com yugabyte yw commissioner ybctaskbase polltaskprogress ybctaskbase java at com yugabyte yw commissioner tasks subtasks restorebackupybc run restorebackupybc java at com yugabyte yw commissioner taskexecutor abstractrunnabletask run taskexecutor java at com yugabyte yw commissioner taskexecutor runnablesubtask run taskexecutor java common frames omitted further details in slack thread accessible internal to yb cc renjith yb kripasreenivasan warning please confirm that this issue does not contain any sensitive information i confirm this issue does not contain any 
sensitive information ,1
4059,15304067613.0,IssuesEvent,2021-02-24 16:29:27,rstudio/rstudio,https://api.github.com/repos/rstudio/rstudio,opened,Add automation locator ids for the Environment tab,automation,"
I could use some ids under that environment tab for the values or other things that might show in this area.

",1.0,"Add automation locator ids for the Environment tab -
I could use some ids under that environment tab for the values or other things that might show in this area.

",1,add automation locator ids for the environment tab thanks for taking the time to file a feature request please take the time to search for an existing feature request to avoid creating duplicate requests if you find an existing feature request please give it a thumbs up reaction as we ll use these reactions to help prioritize the implementation of these features in the future if the feature has not yet been filed then please describe the feature you d like to see become a part of rstudio see for a guide on how to write good feature requests i could use some ids under that environment tab for the values or other things that might show in this area ,1
8668,27172062820.0,IssuesEvent,2023-02-17 20:25:19,OneDrive/onedrive-api-docs,https://api.github.com/repos/OneDrive/onedrive-api-docs,closed,"[7.2] File Picker for JS: TypeError ""tbody.rows.count is not a function"" (action: download)",type:bug area:Picker automation:Closed,"The OneDrive file picker for JavaScript opens, authenticates and displays my files just fine. It also lets me select a file and closes the dialog when I click “Open”. But the opener window does not receive proper file info – instead, I see these messages in the browser console:
```
[OneDriveSDK] calling xhr failure callback, status: EXCEPTION
TypeError
message: ""tbody.rows.count is not a function""
stack: ""startAjaxRequest@https://example.com/picker:56:4699
XMLHttpRequest.prototype.open@example.com/picker:56:13291
[33] Vulnerable Library - jackson-databind-2.9.2.jar
General data-binding functionality for Jackson: works on core streaming API
Path to dependency file: zaproxy/buildSrc/build.gradle.kts
Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.9.2/1d8d8cb7cf26920ba57fb61fa56da88cc123b21f/jackson-databind-2.9.2.jar
A Polymorphic Typing issue was discovered in FasterXML jackson-databind 2.x before 2.9.9. When Default Typing is enabled (either globally or for a specific property) for an externally exposed JSON endpoint, the service has the mysql-connector-java jar (8.0.14 or earlier) in the classpath, and an attacker can host a crafted MySQL server reachable by the victim, an attacker can send a crafted JSON message that allows them to read arbitrary local files on the server. This occurs because of missing com.mysql.cj.jdbc.admin.MiniAdmin validation.
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2019-12086 (High) detected in jackson-databind-2.9.2.jar - ## CVE-2019-12086 - High Severity Vulnerability
Vulnerable Library - jackson-databind-2.9.2.jar
General data-binding functionality for Jackson: works on core streaming API
Path to dependency file: zaproxy/buildSrc/build.gradle.kts
Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.9.2/1d8d8cb7cf26920ba57fb61fa56da88cc123b21f/jackson-databind-2.9.2.jar
A Polymorphic Typing issue was discovered in FasterXML jackson-databind 2.x before 2.9.9. When Default Typing is enabled (either globally or for a specific property) for an externally exposed JSON endpoint, the service has the mysql-connector-java jar (8.0.14 or earlier) in the classpath, and an attacker can host a crafted MySQL server reachable by the victim, an attacker can send a crafted JSON message that allows them to read arbitrary local files on the server. This occurs because of missing com.mysql.cj.jdbc.admin.MiniAdmin validation.
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve high detected in jackson databind jar cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file zaproxy buildsrc build gradle kts path to vulnerable library home wss scanner gradle caches modules files com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy github api jar root library x jackson databind jar vulnerable library found in head commit a href found in base branch develop vulnerability details a polymorphic typing issue was discovered in fasterxml jackson databind x before when default typing is enabled either globally or for a specific property for an externally exposed json endpoint the service has the mysql connector java jar or earlier in the classpath and an attacker can host a crafted mysql server reachable by the victim an attacker can send a crafted json message that allows them to read arbitrary local files on the server this occurs because of missing com mysql cj jdbc admin miniadmin validation publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource ,0
323672,27745213580.0,IssuesEvent,2023-03-15 16:35:36,delph-in/srg,https://api.github.com/repos/delph-in/srg,opened,"""Casi todos los perros ladran""",mrs testsuite,"Item 901 in the MRS test suite: the only reading available seems to be that ""all things that are almost dogs"" bark. But ""Casi"" should be linked to ""todos"", not to perros.",1.0,"""Casi todos los perros ladran"" - Item 901 in the MRS test suite: the only reading available seems to be that ""all things that are almost dogs"" bark. But ""Casi"" should be linked to ""todos"", not to perros.",0, casi todos los perros ladran item in the mrs test suite the only reading available seems to be that all things that are almost dogs bark but casi should be linked to todos not to perros ,0
3604,14121075591.0,IssuesEvent,2020-11-09 00:43:53,surge-synthesizer/surge,https://api.github.com/repos/surge-synthesizer/surge,closed,Maybe a VST3 CC issue?,Bug Report Host Automation VST3,"As @JackyLigon and I discussed on slack: (edited for clarity)
Jacky 5:25 PM
@baconpaul Probably need to dig a little deeper, but was attempting to map an Assignable Controller to a synth parameter using CC17, and I can see it - curiously - changing FX1 Return slider, which I have in no way assigned to anything. Need to look more closely though.
baconpaul 5:27 PM
VST2 or VST3?
Jacky 5:27 PM
VST3
baconpaul 5:27 PM
And latest nightly?
Jacky 5:28 PM
Surge-NIGHTLY-2020-02-12-48b6528-Setup
baconpaul 5:29 PM
OK there’s some hairy mapping which goes on in the VST3 to get CCs working properly. I had tested that pretty closely but the FX return is in the collection of params which is kinda in the ‘painful and too complicated’ range so if you find a clear example that would be very very very useful
Jacky 5:30 PM
As a step in trying to diagnose the MIDI mapping issue above, I've been using Reaper's JS MIDI Examiner to see CCs and values coming from the keyboard controller, just confirming CC17...
baconpaul 5:30 PM
CC17 and which channel would be useful too!
Jacky 5:34 PM
Happens actually that it was transmitting CC17 on CH 3 in this case.
baconpaul 5:34 PM
@Jacky OK I will poke at it. Lemme open an issue so I don’t forget",1.0,"Maybe a VST3 CC issue? - As @JackyLigon and I discussed on slack: (edited for clarity)
Jacky 5:25 PM
@baconpaul Probably need to dig a little deeper, but was attempting to map an Assignable Controller to a synth parameter using CC17, and I can see it - curiously - changing FX1 Return slider, which I have in no way assigned to anything. Need to look more closely though.
baconpaul 5:27 PM
VST2 or VST3?
Jacky 5:27 PM
VST3
baconpaul 5:27 PM
And latest nightly?
Jacky 5:28 PM
Surge-NIGHTLY-2020-02-12-48b6528-Setup
baconpaul 5:29 PM
OK there’s some hairy mapping which goes on in the VST3 to get CCs working properly. I had tested that pretty closely but the FX return is in the collection of params which is kinda in the ‘painful and too complicated’ range so if you find a clear example that would be very very very useful
Jacky 5:30 PM
As a step in trying to diagnose the MIDI mapping issue above, I've been using Reaper's JS MIDI Examiner to see CCs and values coming from the keyboard controller, just confirming CC17...
baconpaul 5:30 PM
CC17 and which channel would be useful too!
Jacky 5:34 PM
Happens actually that it was transmitting CC17 on CH 3 in this case.
baconpaul 5:34 PM
@Jacky OK I will poke at it. Lemme open an issue so I don’t forget",1,maybe a cc issue as jackyligon and i discussed on slack edited for clarity jacky pm baconpaul probably need to dig a little deeper but was attempting to map an assignable controller to a synth parameter using and i can see it curiously changing return slider which i have in no way assigned to anything need to look more closely though baconpaul pm or jacky pm baconpaul pm and latest nightly jacky pm surge nightly setup baconpaul pm ok there’s some hairy mapping which goes on in the to get ccs working properly i had tested that pretty closely but the fx return is in the collection of params which is kinda in the ‘painful and too complicated’ range so if you find a clear example that woudl be very very very useful jacky pm as a step in trying to diagnose the midi mapping issue above i ve been using reaper s js midi examiner to see ccs and values coming from the keyboard controller just confirming baconpaul pm and which channel would be useful too jacky pm happens actually that it was transmitting on ch in this case baconpaul pm jacky ok i will poke at it lemme open an issue so i don’t forget,1
8625,2875502790.0,IssuesEvent,2015-06-09 08:36:29,nilmtk/nilmtk,https://api.github.com/repos/nilmtk/nilmtk,opened,Consider keeping cache in separate file,DataStore and format conversion design Statistics and correlations,"At the moment, we modify the main dataset HDF5 file to store cached statistics. This has the advantage that the cache is kept with the data. But it has several disadvantages:
* Sometimes the HDF5 can become corrupted (e.g. in #328)
* It slightly complicates our unit tests (because we need to replace modified test files with originals)
So perhaps we should consider keeping the cache in a separate file (maybe even keeping it in the OS temporary directory)?",1.0,"Consider keeping cache in separate file - At the moment, we modify the main dataset HDF5 file to store cached statistics. This has the advantage that the cache is kept with the data. But it has several disadvantages:
* Sometimes the HDF5 can become corrupted (e.g. in #328)
* It slightly complicates our unit tests (because we need to replace modified test files with originals)
So perhaps we should consider keeping the cache in a separate file (maybe even keeping it in the OS temporary directory)?",0,consider keeping cache in separate file at the moment we modify the main dataset file to store cached statistics this has the advantage that the cache is kept with the data but it has several disadvantages sometimes the can become corrupted e g in it slightly complicates our unit tests because we need to replace modified test files with originals so perhaps we should consider keeping the cache in a separate file maybe even keeping it in the os temporary directory ,0
1932,11135065933.0,IssuesEvent,2019-12-20 13:30:30,sourcegraph/sourcegraph,https://api.github.com/repos/sourcegraph/sourcegraph,closed,a8n: Gracefully handle changeset deletion in code hosts,automation,"Some code hosts allow changesets to be deleted, others don't. For instance, GitHub pull requests can't be deleted (only closed or merged), but Bitbucket's PRs can.
To be reliable in the face of upstream deletions of changesets, the `a8n.Syncer` must be changed to handle this scenario gracefully.
My intuition is that if the changeset is deleted on the code host, we should delete it on Sourcegraph, as we do for repositories.
But that may be weird if we were the ones creating the changeset upstream and someone deleted manually. @sourcegraph/automation, @sqs: Thoughts on the desirable user experience here?",1.0,"a8n: Gracefully handle changeset deletion in code hosts - Some code hosts allow changesets to be deleted, others don't. For instance, GitHub pull requests can't be deleted (only closed or merged), but Bitbucket's PRs can.
To be reliable in the face of upstream deletions of changesets, the `a8n.Syncer` must be changed to handle this scenario gracefully.
My intuition is that if the changeset is deleted on the code host, we should delete it on Sourcegraph, as we do for repositories.
But that may be weird if we were the ones creating the changeset upstream and someone deleted manually. @sourcegraph/automation, @sqs: Thoughts on the desirable user experience here?",1, gracefully handle changeset deletion in code hosts some code hosts allow changesets to be deleted others don t for instance github pull requests can t be deleted only closed or merged but bitbucket s prs can to be reliable in the face of upstream deletions of changesets the syncer must be changed to handle this scenario gracefully my intuition is that if the changeset is deleted on the code host we should delete it on sourcegraph as we do for repositories but that may be weird if we were the ones creating the changeset upstream and someone deleted manually sourcegraph automation sqs thoughts on the desirable user experience here ,1
871,8488840804.0,IssuesEvent,2018-10-26 17:56:38,MicrosoftDocs/azure-docs,https://api.github.com/repos/MicrosoftDocs/azure-docs,closed,Outbound connectivity--to what?,assigned-to-author automation/svc doc-enhancement triaged,"States 'Outbound connectivity from the VM is required to return the results of the script'. Connectivity to what? Need IP/protocol/port info.
Also, ideally there should be an NSG Service Tag to make it easy to configure the correct connectivity.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: b3ff94e6-3c1f-b9e1-1a46-b75fe2ffca1b
* Version Independent ID: 8f3ca735-ddd4-f4a9-4ee0-189604018784
* Content: [Run PowerShell scripts in an Windows VM in Azure](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/run-command#powershell)
* Content Source: [articles/virtual-machines/windows/run-command.md](https://github.com/Microsoft/azure-docs/blob/master/articles/virtual-machines/windows/run-command.md)
* Service: **automation**
* GitHub Login: @georgewallace
* Microsoft Alias: **gwallace**",1.0,"Outbound connectivity--to what? - States 'Outbound connectivity from the VM is required to return the results of the script'. Connectivity to what? Need IP/protocol/port info.
Also, ideally there should be an NSG Service Tag to make it easy to configure the correct connectivity.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: b3ff94e6-3c1f-b9e1-1a46-b75fe2ffca1b
* Version Independent ID: 8f3ca735-ddd4-f4a9-4ee0-189604018784
* Content: [Run PowerShell scripts in an Windows VM in Azure](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/run-command#powershell)
* Content Source: [articles/virtual-machines/windows/run-command.md](https://github.com/Microsoft/azure-docs/blob/master/articles/virtual-machines/windows/run-command.md)
* Service: **automation**
* GitHub Login: @georgewallace
* Microsoft Alias: **gwallace**",1,outbound connectivity to what states outbound connectivity from the vm is required to return the results of the script connectivity to what need ip protocol port info also ideally there should be an nsg service tag to make it easy to configure the correct connectivity document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service automation github login georgewallace microsoft alias gwallace ,1
23653,6461440723.0,IssuesEvent,2017-08-16 08:09:02,q2g/q2g-ext-selector,https://api.github.com/repos/q2g/q2g-ext-selector,closed,css naming issue extensionHeader.less / .css,code quality,"please fix the css name for:
q2g-ext-selector/src/lib/daVinci.js/src/directives/extensionHeader.less
",1.0,"css naming issue extensionHeader.less / .css - please fix the css name for:
q2g-ext-selector/src/lib/daVinci.js/src/directives/extensionHeader.less
",0,css naming issue extensionheader less css please fix the css name for ext selector src lib davinci js src directives extensionheader less ,0
135769,19663102182.0,IssuesEvent,2022-01-10 19:10:28,department-of-veterans-affairs/va.gov-team,https://api.github.com/repos/department-of-veterans-affairs/va.gov-team,closed,Check in with Oscar (SAHG),design vsa vsa-ebenefits SAHG,"## Background
We need to talk to Oscar
## Considerations
## Tasks
- [x] Tues - Wed: email reach out
- [ ]
## Acceptance Criteria
- [x] Oscar has been reached out to",1.0,"Check in with Oscar (SAHG) - ## Background
We need to talk to Oscar
## Considerations
## Tasks
- [x] Tues - Wed: email reach out
- [ ]
## Acceptance Criteria
- [x] Oscar has been reached out to",0,check in with oscar sahg background we need to talk to oscar considerations tasks tues wed email reach out acceptance criteria oscar has been reached out to,0
580,7314797087.0,IssuesEvent,2018-03-01 08:50:04,snowplow/iglu-central,https://api.github.com/repos/snowplow/iglu-central,opened,Add authorization SQL to DDLs generated by igluctl,automation,"Data modeling jobs are often run by a special datamodeling user in Redshift, which has limited permissions on `atomic`. When clients update a table, say because they are introducing a new version of the schema, which is already used in a datamodeling job, the datamodeling user loses its permissions, which causes the job to fail.
It would be good if schema DDLs generated with igluctl have some additional SQL at the end, along the lines of:
```
GRANT SELECT ON {{atomic}}.{{table}} TO {{datamodeling}}
```",1.0,"Add authorization SQL to DDLs generated by igluctl - Data modeling jobs are often run by a special datamodeling user in Redshift, which has limited permissions on `atomic`. When clients update a table, say because they are introducing a new version of the schema, which is already used in a datamodeling job, the datamodeling user loses its permissions, which causes the job to fail.
It would be good if schema DDLs generated with igluctl have some additional SQL at the end, along the lines of:
```
GRANT SELECT ON {{atomic}}.{{table}} TO {{datamodeling}}
```",1,add authorization sql to ddls generated by igluctl data modeling jobs are often run by a special datamodeling user in redshift which has limited permissions on atomic when clients update a table say because they are introducing a new version of the schema which is already used in a datamodeling job the datamodeling user loses its permissions which causes the job to fail it would be good if schema ddls generated with igluctl have some additional sql at the end along the lines of grant select on atomic table to datamodeling ,1
2525,12221737832.0,IssuesEvent,2020-05-02 09:37:01,krsiakdaniel/movies,https://api.github.com/repos/krsiakdaniel/movies,closed,GitHub - Apps + Actions,automation,"## actions
https://github.com/krsiakdaniel/movies/actions/new
- [x] greetings: https://github.com/krsiakdaniel/movies/pull/86
- [x] `stale.yml` config: https://github.com/krsiakdaniel/movies/commit/768aa0c8520ab1161d29f8fb80680a40a209203e
## apps
https://github.com/marketplace?type=apps
Installed:
- [x] [StaleBot](https://github.com/marketplace/stale)
- [x] imgBot: https://github.com/krsiakdaniel/movies/pull/80
- [x] Dependabot : https://github.com/krsiakdaniel/movies/pull/81",1.0,"GitHub - Apps + Actions - ## actions
https://github.com/krsiakdaniel/movies/actions/new
- [x] greetings: https://github.com/krsiakdaniel/movies/pull/86
- [x] `stale.yml` config: https://github.com/krsiakdaniel/movies/commit/768aa0c8520ab1161d29f8fb80680a40a209203e
## apps
https://github.com/marketplace?type=apps
Installed:
- [x] [StaleBot](https://github.com/marketplace/stale)
- [x] imgBot: https://github.com/krsiakdaniel/movies/pull/80
- [x] Dependabot : https://github.com/krsiakdaniel/movies/pull/81",1,github apps actions actions greetings stale yml config apps installed imgbot dependabot ,1
289032,8854296746.0,IssuesEvent,2019-01-09 00:41:33,visit-dav/issues-test,https://api.github.com/repos/visit-dav/issues-test,closed,Remove special handling for the gremlin system at LLNL since it is now a chaos 5 OS.,bug crash likelihood medium priority reviewed severity high wrong results,"Gremlin used to have a special build because it was running chaos 4, whereas all the other LLNL clusters were running chaos 5. A special build is also no longer needed for it, so its config site file should be removed and it should be removed from visitbuildclosed and visitinstallclosed, since its executables will be built on the inca system.
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. As such, not all
information was able to be captured in the transition. Below is
a complete record of the original redmine ticket.
Ticket number: 1390
Status: Resolved
Project: VisIt
Tracker: Bug
Priority: Urgent
Subject: Remove special handling for the gremlin system at LLNL since it is now a chaos 5 OS.
Assigned to: Eric Brugger
Category: -
Target version: 2.6.2
Author: Eric Brugger
Start: 03/20/2013
Due date:
% Done: 100%
Estimated time: 1.00 hour
Created: 03/20/2013 02:38 pm
Updated: 03/20/2013 03:24 pm
Likelihood: 3 - Occasional
Severity: 4 - Crash / Wrong Results
Found in version: 2.6.1
Impact:
Expected Use:
OS: All
Support Group: Any
Description:
Gremlin used to have a special build because it was running chaos 4, whereas all the other LLNL clusters were running chaos 5. A special build is also no longer needed for it, so its config site file should be removed and it should be removed from visitbuildclosed and visitinstallclosed, since its executables will be built on the inca system.
Comments:
I committed revisions 20579 and 20581 to the 2.6 RC and trunk with the following change: 1) I removed the config site file for gremlin and removed it from both visitbuildclosed and visitinstallclosed since gremlin's operating system no longer differs from other LLNL clusters. I also removed the custom coding for gremlin from the custom launcher. This resolves #1390.
D configsite/gremlin3.cmake
M resources/hosts/llnl_closed/customlauncher
M svn_bin/visitbuildclosed
M svn_bin/visitinstall-closed
",1.0,"Remove special handling for the gremlin system at LLNL since it is now a chaos 5 OS. - Gremlin used to have a special build because it was running chaos 4, whereas all the other LLNL clusters were running chaos 5. A special build is also no longer needed for it, so its config site file should be removed and it should be removed from visitbuildclosed and visitinstallclosed, since its executables will be built on the inca system.
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. As such, not all
information was able to be captured in the transition. Below is
a complete record of the original redmine ticket.
Ticket number: 1390
Status: Resolved
Project: VisIt
Tracker: Bug
Priority: Urgent
Subject: Remove special handling for the gremlin system at LLNL since it is now a chaos 5 OS.
Assigned to: Eric Brugger
Category: -
Target version: 2.6.2
Author: Eric Brugger
Start: 03/20/2013
Due date:
% Done: 100%
Estimated time: 1.00 hour
Created: 03/20/2013 02:38 pm
Updated: 03/20/2013 03:24 pm
Likelihood: 3 - Occasional
Severity: 4 - Crash / Wrong Results
Found in version: 2.6.1
Impact:
Expected Use:
OS: All
Support Group: Any
Description:
Gremlin used to have a special build because it was running chaos 4, whereas all the other LLNL clusters were running chaos 5. A special build is also no longer needed for it, so its config site file should be removed and it should be removed from visitbuildclosed and visitinstallclosed, since its executables will be built on the inca system.
Comments:
I committed revisions 20579 and 20581 to the 2.6 RC and trunk with the following change: 1) I removed the config site file for gremlin and removed it from both visitbuildclosed and visitinstallclosed since gremlin's operating system no longer differs from other LLNL clusters. I also removed the custom coding for gremlin from the custom launcher. This resolves #1390.
D configsite/gremlin3.cmake
M resources/hosts/llnl_closed/customlauncher
M svn_bin/visitbuildclosed
M svn_bin/visitinstall-closed
",0,remove special handling for the gremlin system at llnl since it is now a chaos os gremlin used to have a special build because it was running chaos whereas all the other llnl clusters were running chaos a special build is also no longer needed for it so its config site file should be removed and it should be removed from visitbuildclosed and visitinstallclosed since its executables will be built on the inca system redmine migration this ticket was migrated from redmine as such not all information was able to be captured in the transition below is a complete record of the original redmine ticket ticket number status resolved project visit tracker bug priority urgent subject remove special handling for the gremlin system at llnl since it is now a chaos os assigned to eric brugger category target version author eric brugger start due date done estimated time hour created pm updated pm likelihood occasional severity crash wrong results found in version impact expected use os all support group any description gremlin used to have a special build because it was running chaos whereas all the other llnl clusters were running chaos a special build is also no longer needed for it so its config site file should be removed and it should be removed from visitbuildclosed and visitinstallclosed since its executables will be built on the inca system comments i committed revisions and to the rc and trunk with thefollowing change i removed the config site file for gremlin and removed it from both visitbuildclosed and visitinstallclosed since gremlin s operating system no longer differs from other llnl clusters i also removed the custom coding for gremlin from the custom launcher this resolves d configsite cmakem resources hosts llnl closed customlauncherm svn bin visitbuildclosedm svn bin visitinstall closed ,0
8836,27172312544.0,IssuesEvent,2023-02-17 20:39:51,OneDrive/onedrive-api-docs,https://api.github.com/repos/OneDrive/onedrive-api-docs,closed,Detecting file rename kind of scenario using delta query,Needs: Attention :wave: automation:Closed,"
#### Category
- [x] Question
- [ ] Documentation issue
- [ ] Bug
What is the right way to detect whether contents of a file have been modified or file has just been renamed? In the delta query response, both are returned with etag and ctag modified.
[ ]: http://aka.ms/onedrive-api-issues
[x]: http://aka.ms/onedrive-api-issues",1.0,"Detecting file rename kind of scenario using delta query -
#### Category
- [x] Question
- [ ] Documentation issue
- [ ] Bug
What is the right way to detect whether contents of a file have been modified or file has just been renamed? In the delta query response, both are returned with etag and ctag modified.
[ ]: http://aka.ms/onedrive-api-issues
[x]: http://aka.ms/onedrive-api-issues",1,detecting file rename kind of scenario using delta query category question documentation issue bug what is the right way to detect whether contents of a file have been modified or file has just been renamed in the delta query response both are returned with etag and ctag modified ,1
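For the delta-query question in the issue above, one heuristic (a sketch, not the documented answer) is to keep a small local index of each item's last known name and cTag and compare incoming delta items against it: an unchanged cTag with a different name most likely indicates a rename, while a changed cTag indicates modified content. The TypeScript below is illustrative only; the trimmed `DriveItemLite` shape and the `localIndex` map are hypothetical, and it assumes cTag is not bumped for metadata-only changes, which the report above suggests may not always hold in practice.
```
// Hypothetical, trimmed-down item shape; a real delta response carries many more fields.
interface DriveItemLite {
  id: string;
  name: string;
  cTag?: string;
  deleted?: { state: string };
}

// Last known state per item, keyed by the stable item id rather than the path.
const localIndex = new Map<string, { name: string; cTag?: string }>();

// Classify one delta item against the last state we saw for it.
function classifyChange(item: DriveItemLite): "deleted" | "new" | "modified" | "renamed" | "unchanged" {
  if (item.deleted) return "deleted";
  const known = localIndex.get(item.id);
  if (!known) return "new";
  if (item.cTag !== known.cTag) return "modified"; // content changed (name may have changed too)
  if (item.name !== known.name) return "renamed";  // metadata-only change
  return "unchanged";
}

// Remember the latest state so the next delta round has a baseline to compare against.
function rememberItem(item: DriveItemLite): void {
  if (item.deleted) {
    localIndex.delete(item.id);
  } else {
    localIndex.set(item.id, { name: item.name, cTag: item.cTag });
  }
}
```
If cTag does turn out to change on a rename, as the report suggests, the heuristic simply classifies the change as "modified", which is the safer of the two outcomes for a sync client.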
8205,26453349575.0,IssuesEvent,2023-01-16 12:52:02,rancher/elemental,https://api.github.com/repos/rancher/elemental,closed,Research - Use autoscaling for self-hosted runners in public cloud,area/automation kind/QA,"Right now, we have self-hosted runners in the public cloud; the machines are started and stopped on demand.
It is a good first step, but when we add more tests, we will need something better.
We can think about autoscaling the runners; it is possible with the GitHub API, and there are some resources on how to achieve it.
For instance:
https://www.dev-eth0.de/2021/03/09/autoscaling-gitlab-runner-instances-on-google-cloud-platform/
https://medium.com/philips-technology-blog/scaling-github-action-runners-a4a45f7c67a6
https://github.blog/changelog/2021-09-20-github-actions-ephemeral-self-hosted-runners-new-webhooks-for-auto-scaling/
But it's low priority at the moment.",1.0,"Research - Use autoscaling for self-hosted runners in public cloud - Right now, we have self-hosted runners in the public cloud; the machines are started and stopped on demand.
It is a good first step, but when we add more tests, we will need something better.
We can think about autoscaling the runners; it is possible with the GitHub API, and there are some resources on how to achieve it.
For instance:
https://www.dev-eth0.de/2021/03/09/autoscaling-gitlab-runner-instances-on-google-cloud-platform/
https://medium.com/philips-technology-blog/scaling-github-action-runners-a4a45f7c67a6
https://github.blog/changelog/2021-09-20-github-actions-ephemeral-self-hosted-runners-new-webhooks-for-auto-scaling/
But it's low priority at the moment.",1,research use autoscaling for self hosted runners in public cloud right now we have self hosted runners in the public cloud the machines is started and stopped on demand it is a good first step but when we will add more tests we will need something better we can think about autoscaling the runners it is possible with the github api and there are some resources on how to achieve it for instance but it s low priority at the moment ,1
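As a rough illustration of the "possible with the GitHub API" point in the issue above, the sketch below shows only the registration-token half of an autoscaler. The call to Octokit's `createRegistrationTokenForRepo` corresponds to a real REST endpoint (`POST /repos/{owner}/{repo}/actions/runners/registration-token`); everything else, including the token-based auth, the `provisionCloudInstance` helper, and its environment variable names, is a hypothetical placeholder, since the actual provisioning depends on the chosen cloud provider.
```
import { Octokit } from "@octokit/rest";

// Assumes a personal access token or GitHub App token is available in GITHUB_TOKEN.
const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });

// Hypothetical placeholder: a real implementation would boot a VM whose startup
// script downloads the runner package and runs config.sh with the token below.
async function provisionCloudInstance(env: Record<string, string>): Promise<void> {
  console.log("would boot an ephemeral runner VM with", env);
}

// Request a short-lived registration token and hand it to the provisioning step.
async function scaleUpRunner(owner: string, repo: string): Promise<void> {
  const { data } = await octokit.rest.actions.createRegistrationTokenForRepo({ owner, repo });
  await provisionCloudInstance({
    RUNNER_URL: `https://github.com/${owner}/${repo}`,
    RUNNER_TOKEN: data.token,
    RUNNER_EPHEMERAL: "1",
  });
}
```
The `workflow_job` webhook mentioned in the linked GitHub changelog about ephemeral runners would likely be the natural trigger for calling `scaleUpRunner`, so capacity follows queued jobs rather than a fixed schedule.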
2556,12270647678.0,IssuesEvent,2020-05-07 15:49:03,bandprotocol/bandchain,https://api.github.com/repos/bandprotocol/bandchain,closed,Make sure initial 4 validators in our testnet work become data providers,automation chain,Does not need to be during genesis. Can just be a command that broadcasts signed transactions to the network.,1.0,Make sure initial 4 validators in our testnet work become data providers - Does not need to be during genesis. Can just be a command that broadcasts signed transactions to the network.,1,make sure initial validators in our testnet work become data providers does not need to be during genesis can just be a command that broadcasts signed transactions to the network ,1
3717,14406656236.0,IssuesEvent,2020-12-03 20:33:36,SynBioDex/SBOL-visual,https://api.github.com/repos/SynBioDex/SBOL-visual,closed,Summary SEP catalog,automation,"Automate generation of an SEP catalog collection in this repository, as has been done for the SBOL Data SEPs.",1.0,"Summary SEP catalog - Automate generation of an SEP catalog collection in this repository, as has been done for the SBOL Data SEPs.",1,summary sep catalog automate generation of an sep catalog collection in this repository as has been done for the sbol data seps ,1
9340,28018599256.0,IssuesEvent,2023-03-28 02:12:43,nephio-project/nephio,https://api.github.com/repos/nephio-project/nephio,opened,Implement IPAM controller ,area/package-specialization sig/automation,Implement and package IPAM controller based on the workshop prototype. ,1.0,Implement IPAM controller - Implement and package IPAM controller based on the workshop prototype. ,1,implement ipam controller implement and package ipam controller based on the workshop prototype ,1
7970,25950388333.0,IssuesEvent,2022-12-17 14:14:15,awslabs/aws-lambda-powertools-typescript,https://api.github.com/repos/awslabs/aws-lambda-powertools-typescript,closed,Maintenance: remove httpbin.org request from integration tests,area/automation type/internal status/confirmed,"### Summary
To test Tracer's capture http requests feature the integration tests have a few requests to a 3rd party service called `httpbin.org`.
At the moment the service appears to be unreachable (504). While I expect the service to come back online, it's a good moment to remove it and use another one that we have more control over.
### Why is this needed?
To remove the dependency from the 3rd party service and continue to be able to run integration tests successfully.
### Which area does this relate to?
Tests
### Solution
Change the host to which the request is made and add a timeout so that if the host is unreachable the tests will fail fast.
### Acknowledgment
- [X] This request meets [Lambda Powertools Tenets](https://awslabs.github.io/aws-lambda-powertools-typescript/latest/#tenets)
- [ ] Should this be considered in other Lambda Powertools languages? i.e. [Python](https://github.com/awslabs/aws-lambda-powertools-python/), [Java](https://github.com/awslabs/aws-lambda-powertools-java/)",1.0,"Maintenance: remove httpbin.org request from integration tests - ### Summary
To test Tracer's capture http requests feature the integration tests have a few requests to a 3rd party service called `httpbin.org`.
At the moment the service appears to be unreachable (504). While I expect the service to come back online, it's a good moment to remove it and use another one that we have more control over.
### Why is this needed?
To remove the dependency from the 3rd party service and continue to be able to run integration tests successfully.
### Which area does this relate to?
Tests
### Solution
Change the host to which the request is made and add a timeout so that if the host is unreachable the tests will fail fast.
### Acknowledgment
- [X] This request meets [Lambda Powertools Tenets](https://awslabs.github.io/aws-lambda-powertools-typescript/latest/#tenets)
- [ ] Should this be considered in other Lambda Powertools languages? i.e. [Python](https://github.com/awslabs/aws-lambda-powertools-python/), [Java](https://github.com/awslabs/aws-lambda-powertools-java/)",1,maintenance remove httpbin org request from integration tests summary to test tracer s capture http requests feature the integration tests have a few requests to a party service called httpbin org at the moment the service appears to be unreachable while i expect the service to come back online it s a good moment to remove it and use another one that we have more control over why is this needed to remove the dependency from the party service and continue to be able to run integration tests successfully which area does this relate to tests solution change the host to which the request is made and add a timeout so that if the host is unreachable the tests will fail fast acknowledgment this request meets should this be considered in other lambda powertools languages i e ,1
45055,13102425000.0,IssuesEvent,2020-08-04 06:40:54,kubesphere/kubesphere,https://api.github.com/repos/kubesphere/kubesphere,closed,Feature: add security section when creating workloads,area/security frozen,"Besides advance section, KubeSphere should add an isolated section named ‘security’, and place related options to clients, for example, ‘run as non-root’, ‘disabling mounting folders like /etc or /root’
https://mp.weixin.qq.com/s/jtDlMe5SprpZfIfXryAjzg
Actually all these options could be under advance section, but use specified ‘security’ section, could highlight it and make client understand its importance. ",True,"Feature: add security section when creating workloads - Besides advance section, KubeSphere should add an isolated section named ‘security’, and place related options to clients, for example, ‘run as non-root’, ‘disabling mounting folders like /etc or /root’
https://mp.weixin.qq.com/s/jtDlMe5SprpZfIfXryAjzg
Actually all these options could be under advance section, but use specified ‘security’ section, could highlight it and make client understand its importance. ",0,feature add security section when creating workloads besides advance section kubesphere should add an isolated section named ‘security’ and place related options to clients for example ‘run as non root’ ‘disabling mounting folders like etc or root’ actually all these options could be under advance section but use specified ‘security’ section could highlight it and make client understand its importance ,0
3077,13055138982.0,IssuesEvent,2020-07-30 00:44:32,MicrosoftDocs/azure-docs,https://api.github.com/repos/MicrosoftDocs/azure-docs,closed,is it Cluster aware?,Pri2 automation/svc cxp product-question triaged update-management/subsvc,"Does this feature has a way to know if my VMs are part of a Windows Cluster and somehow coordinate the update (kind of Cluster Aware Updating)?
I am thinking of scenarios where I have SQL FCIs or Availability Groups on Azure VMs, or scenarios where I have other type of Windows Clusters (HyperV or File Server) in Non Azure VMs
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: db47901d-1664-058d-9407-47e932dc9661
* Version Independent ID: e90ec9ee-e7da-4f19-248f-4c825aaa8b9f
* Content: [Azure Automation Update Management overview](https://docs.microsoft.com/en-us/azure/automation/automation-update-management)
* Content Source: [articles/automation/automation-update-management.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/automation/automation-update-management.md)
* Service: **automation**
* Sub-service: **update-management**
* GitHub Login: @MGoedtel
* Microsoft Alias: **magoedte**",1.0,"is it Cluster aware? - Does this feature has a way to know if my VMs are part of a Windows Cluster and somehow coordinate the update (kind of Cluster Aware Updating)?
I am thinking of scenarios where I have SQL FCIs or Availability Groups on Azure VMs, or scenarios where I have other type of Windows Clusters (HyperV or File Server) in Non Azure VMs
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: db47901d-1664-058d-9407-47e932dc9661
* Version Independent ID: e90ec9ee-e7da-4f19-248f-4c825aaa8b9f
* Content: [Azure Automation Update Management overview](https://docs.microsoft.com/en-us/azure/automation/automation-update-management)
* Content Source: [articles/automation/automation-update-management.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/automation/automation-update-management.md)
* Service: **automation**
* Sub-service: **update-management**
* GitHub Login: @MGoedtel
* Microsoft Alias: **magoedte**",1,is it cluster aware does this feature has a way to know if my vms are part of a windows cluster and somehow coordinate the update kind of cluster aware updating i am thinking of scenarios where i have sql fcis or availability groups on azure vms or scenarios where i have other type of windows clusters hyperv or file server in non azure vms document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service automation sub service update management github login mgoedtel microsoft alias magoedte ,1
342692,30634563166.0,IssuesEvent,2023-07-24 16:44:39,hyperledger/cacti,https://api.github.com/repos/hyperledger/cacti,opened,chore: fix package.json#name properties to have scope and project name,bug good-first-issue Developer_Experience Significant_Change Hacktoberfest good-first-issue-400-expert Tests P1,"There are package.json manifest files in the project that have some generic name
with a high probability of future collisions.
For example in `examples/cactus-example-electricity-trade/tools/periodicExecuter/package.json`
the `name` property is set to `""periodicExecuter""` instead of `@hyperledger/cactus-example-electricity-trade-periodic-executer`
",1.0,"chore: fix package.json#name properties to have scope and project name - There are package.json manifest files in the project that have some generic name
with a high probability of future collisions.
For example in `examples/cactus-example-electricity-trade/tools/periodicExecuter/package.json`
the `name` property is set to `""periodicExecuter""` instead of `@hyperledger/cactus-example-electricity-trade-periodic-executer`
",0,chore fix package json name properties to have scope and project name there are package json manifest files in the project that have some generic name with a high probability of future collisions for example in examples cactus example electricity trade tools periodicexecuter package json the name property is set to periodicexecuter instead of hyperledger cactus example electricity trade periodic executer ,0
69965,9366551101.0,IssuesEvent,2019-04-03 01:19:56,edgedb/edgedb,https://api.github.com/repos/edgedb/edgedb,closed,improve eschema documentation,documentation,"Missing documentation and (mostly) syntax tests for:
- [x] attributes
- [x] final concepts/atoms
- [x] abstract and delegated constraints
- [x] document `ON` for constraints and indexes (in particular that it requires parens)",1.0,"improve eschema documentation - Missing documentation and (mostly) syntax tests for:
- [x] attributes
- [x] final concepts/atoms
- [x] abstract and delegated constraints
- [x] document `ON` for constraints and indexes (in particular that it requires parens)",0,improve eschema documentation missing documentation and mostly syntax tests for attributes final concepts atoms abstract and delegated constraints document on for constraints and indexes in particular that it requires parens ,0
7081,24204950798.0,IssuesEvent,2022-09-25 04:31:23,kanidm/kanidm,https://api.github.com/repos/kanidm/kanidm,opened,Debian package build failure in actions - multiple rust versions appearing,bug automation,"### I did this
Pushed to master
### I expected the following
Debian packages to build
### What happened
The github actions automation failed - somehow after installing rustc 1.64 it ran make with `1.63` -
https://github.com/kanidm/kanidm/actions/runs/3120455346/jobs/5061045245#step:6:1041
```
info: default toolchain set to 'stable-x86_64-unknown-linux-gnu'
stable-x86_64-unknown-linux-gnu installed - rustc 1.64.0 (a55dd71d5 2022-09-19)
Rust is installed now. Great!
To get started you may need to restart your current shell.
This would reload your PATH environment variable to include
Cargo's bin directory ($HOME/.cargo/bin).
```
Nek minnit
```
make[3]: Entering directory '/home/runner/build/kanidm'
cargo build -p daemon --bin kanidmd --release
error: failed to load manifest for workspace member `/home/runner/build/kanidm/kanidm_client`
Caused by:
failed to parse manifest at `/home/runner/build/kanidm/kanidm_client/Cargo.toml`
Caused by:
feature `workspace-inheritance` is required
The package requires the Cargo feature called `workspace-inheritance`, but that feature is not stabilized in this version of Cargo (1.63.0
(fd9c4297c 2022-07-01)).
```
",1.0,"Debian package build failure in actions - multiple rust versions appearing - ### I did this
Pushed to master
### I expected the following
Debian packages to build
### What happened
The github actions automation failed - somehow after installing rustc 1.64 it ran make with `1.63` -
https://github.com/kanidm/kanidm/actions/runs/3120455346/jobs/5061045245#step:6:1041
```
info: default toolchain set to 'stable-x86_64-unknown-linux-gnu'
stable-x86_64-unknown-linux-gnu installed - rustc 1.64.0 (a55dd71d5 2022-09-19)
Rust is installed now. Great!
To get started you may need to restart your current shell.
This would reload your PATH environment variable to include
Cargo's bin directory ($HOME/.cargo/bin).
```
Nek minnit
```
make[3]: Entering directory '/home/runner/build/kanidm'
cargo build -p daemon --bin kanidmd --release
error: failed to load manifest for workspace member `/home/runner/build/kanidm/kanidm_client`
Caused by:
failed to parse manifest at `/home/runner/build/kanidm/kanidm_client/Cargo.toml`
Caused by:
feature `workspace-inheritance` is required
The package requires the Cargo feature called `workspace-inheritance`, but that feature is not stabilized in this version of Cargo (1.63.0
(fd9c4297c 2022-07-01)).
```
",1,debian package build failure in actions multiple rust versions appearing i did this pushed to master i expected the following debian packages to build what happened the github actions automation failed somehow after installing rustc it ran make with info default toolchain set to stable unknown linux gnu stable unknown linux gnu installed rustc rust is installed now great to get started you may need to restart your current shell this would reload your path environment variable to include cargo s bin directory home cargo bin nek minnit make entering directory home runner build kanidm cargo build p daemon bin kanidmd release error failed to load manifest for workspace member home runner build kanidm kanidm client caused by failed to parse manifest at home runner build kanidm kanidm client cargo toml caused by feature workspace inheritance is required the package requires the cargo feature called workspace inheritance but that feature is not stabilized in this version of cargo ,1
8552,27125453371.0,IssuesEvent,2023-02-16 04:39:21,yugabyte/yugabyte-db,https://api.github.com/repos/yugabyte/yugabyte-db,opened,[YSQL][PITR][Tablegroups]: PITR timing out when trying to restore ~100 table creation inside tablegroup,area/ysql priority/high QA status/awaiting-triage pitr qa_automation blocks_automation,"### Description
PITR of table creation (~100 tables) in tablegroup is failing with following error:
`ERR=Error running restore_snapshot_schedule: Timed out (yb/rpc/outbound_call.cc:488): Failed to restore snapshot from schedule: c43048b7-85cd-4ba2-af73-0e3a6c9ce363: RestoreSnapshotSchedule RPC (request call id 4) to 10.9.193.37:7100 timed out after 60.000s`
Observed FATALs when tried to repro manually. Steps followed:
```
1. create universe with packed rows enabled
2. create database
3. create snapshot schedule on created database
4. create tablegroup
5. create 100 tables inside tablegroup
6. restore to time 2
```
FATAL details:
```
F20230215 18:40:59 ../../src/yb/master/state_with_tablets.cc:49] Invalid value of enum SysSnapshotEntryPB::State (full enum type: yb::master::SysSnapshotEntryPB_State, expression: state): RESTORED (8).
@ 0x56282cd59e27 google::LogMessage::SendToLog()
@ 0x56282cd5ad6d google::LogMessage::Flush()
@ 0x56282cd5b3e9 google::LogMessageFatal::~LogMessageFatal()
@ 0x56282e27aea1 yb::FatalInvalidEnumValueInternal()
@ 0x56282d7f1224 yb::master::(anonymous namespace)::InitialStateToTerminalState()
@ 0x56282d7f0e81 yb::master::StateWithTablets::AggregatedState()
@ 0x56282d7e9c83 yb::master::RestorationState::ToEntryPB()
@ 0x56282d7e9c36 yb::master::RestorationState::ToPB()
@ 0x56282d5349ba yb::master::enterprise::CatalogManager::ListSnapshotRestorations()
@ 0x56282d5b5751 yb::master::MasterBackupServiceImpl::ListSnapshotRestorations()
@ 0x56282d8d626a std::__1::__function::__func<>::operator()()
@ 0x56282d8d7cef yb::master::MasterBackupIf::Handle()
@ 0x56282dc7d8ce yb::rpc::ServicePoolImpl::Handle()
@ 0x56282dbbec8f yb::rpc::InboundCall::InboundCallTask::Run()
@ 0x56282dc8c243 yb::rpc::(anonymous namespace)::Worker::Execute()
@ 0x56282e305c7f yb::Thread::SuperviseThread()
@ 0x7fc19e5d4694 start_thread
@ 0x7fc19ead641d __clone
```
The FATALs issue wasn't reproable and might be intermittent but I was able to reproduce timeout issue, 8/10 times.
Version: 2.17.2.0-b109
Logs from automation: [2.17.2.0_testpitrwithtablegroups-aws-rf3_20230215_143100.zip](https://github.com/yugabyte/yugabyte-db/files/10752201/2.17.2.0_testpitrwithtablegroups-aws-rf3_20230215_143100.zip)
I can share the sql file used to create tables, it used complex datatypes.",2.0,"[YSQL][PITR][Tablegroups]: PITR timing out when trying to restore ~100 table creation inside tablegroup - ### Description
PITR of table creation (~100 tables) in tablegroup is failing with following error:
`ERR=Error running restore_snapshot_schedule: Timed out (yb/rpc/outbound_call.cc:488): Failed to restore snapshot from schedule: c43048b7-85cd-4ba2-af73-0e3a6c9ce363: RestoreSnapshotSchedule RPC (request call id 4) to 10.9.193.37:7100 timed out after 60.000s`
Observed FATALs when tried to repro manually. Steps followed:
```
1. create universe with packed rows enabled
2. create database
3. create snapshot schedule on created database
4. create tablegroup
5. create 100 tables inside tablegroup
6. restore to time 2
```
FATAL details:
```
F20230215 18:40:59 ../../src/yb/master/state_with_tablets.cc:49] Invalid value of enum SysSnapshotEntryPB::State (full enum type: yb::master::SysSnapshotEntryPB_State, expression: state): RESTORED (8).
@ 0x56282cd59e27 google::LogMessage::SendToLog()
@ 0x56282cd5ad6d google::LogMessage::Flush()
@ 0x56282cd5b3e9 google::LogMessageFatal::~LogMessageFatal()
@ 0x56282e27aea1 yb::FatalInvalidEnumValueInternal()
@ 0x56282d7f1224 yb::master::(anonymous namespace)::InitialStateToTerminalState()
@ 0x56282d7f0e81 yb::master::StateWithTablets::AggregatedState()
@ 0x56282d7e9c83 yb::master::RestorationState::ToEntryPB()
@ 0x56282d7e9c36 yb::master::RestorationState::ToPB()
@ 0x56282d5349ba yb::master::enterprise::CatalogManager::ListSnapshotRestorations()
@ 0x56282d5b5751 yb::master::MasterBackupServiceImpl::ListSnapshotRestorations()
@ 0x56282d8d626a std::__1::__function::__func<>::operator()()
@ 0x56282d8d7cef yb::master::MasterBackupIf::Handle()
@ 0x56282dc7d8ce yb::rpc::ServicePoolImpl::Handle()
@ 0x56282dbbec8f yb::rpc::InboundCall::InboundCallTask::Run()
@ 0x56282dc8c243 yb::rpc::(anonymous namespace)::Worker::Execute()
@ 0x56282e305c7f yb::Thread::SuperviseThread()
@ 0x7fc19e5d4694 start_thread
@ 0x7fc19ead641d __clone
```
The FATALs issue wasn't reproable and might be intermittent but I was able to reproduce timeout issue, 8/10 times.
Version: 2.17.2.0-b109
Logs from automation: [2.17.2.0_testpitrwithtablegroups-aws-rf3_20230215_143100.zip](https://github.com/yugabyte/yugabyte-db/files/10752201/2.17.2.0_testpitrwithtablegroups-aws-rf3_20230215_143100.zip)
I can share the sql file used to create tables, it used complex datatypes.",1, pitr timing out when trying to restore table creation inside tablegroup description pitr of table creation tables in tablegroup is failing with following error err error running restore snapshot schedule timed out yb rpc outbound call cc failed to restore snapshot from schedule restoresnapshotschedule rpc request call id to timed out after observed fatals when tried to repro manually steps followed create universe with packed rows enabled create database create snapshot schedule on created database create tablegroup create tables inside tablegroup restore to time fatal details src yb master state with tablets cc invalid value of enum syssnapshotentrypb state full enum type yb master syssnapshotentrypb state expression state restored google logmessage sendtolog google logmessage flush google logmessagefatal logmessagefatal yb fatalinvalidenumvalueinternal yb master anonymous namespace initialstatetoterminalstate yb master statewithtablets aggregatedstate yb master restorationstate toentrypb yb master restorationstate topb yb master enterprise catalogmanager listsnapshotrestorations yb master masterbackupserviceimpl listsnapshotrestorations std function func operator yb master masterbackupif handle yb rpc servicepoolimpl handle yb rpc inboundcall inboundcalltask run yb rpc anonymous namespace worker execute yb thread supervisethread start thread clone the fatals issue wasn t reproable and might be intermittent but i was able to reproduce timeout issue times version logs from automation i can share the sql file used to create tables it used complex datatypes ,1
296623,9124447293.0,IssuesEvent,2019-02-24 03:18:40,satvikpendem/Artemis,https://api.github.com/repos/satvikpendem/Artemis,opened,Create landing page animations,Platform: Landing Priority: Medium Type: Enhancement,"Currently the landing page has a video that shows the application in use, but it is outdated to the current visual style of the app. Moreover, video may be slow on certain connections.
Create and recreate such app elements in pure SVG and CSS (with minimal JS, and only for programmatic features such as dark mode toggling #17 or user sign up / login) in order to make a fast website like [Stripe](https://stripe.com). Create all future elements via SVG and CSS as well.",1.0,"Create landing page animations - Currently the landing page has a video that shows the application in use, but it is outdated to the current visual style of the app. Moreover, video may be slow on certain connections.
Create and recreate such app elements in pure SVG and CSS (with minimal JS, and only for programmatic features such as dark mode toggling #17 or user sign up / login) in order to make a fast website like [Stripe](https://stripe.com). Create all future elements via SVG and CSS as well.",0,create landing page animations currently the landing page has a video that shows the application in use but it is outdated to the current visual style of the app moreover video may be slow on certain connections create and recreate such app elements in pure svg and css with minimal js and only for programmatic features such as dark mode toggling or user sign up login in order to make a fast website like create all future elements via svg and css as well ,0
3416,13734444641.0,IssuesEvent,2020-10-05 08:42:32,DevExpress/testcafe,https://api.github.com/repos/DevExpress/testcafe,closed,Text not entered in to Stripe iFrame with Safari,BROWSER: Safari FREQUENCY: level 2 SYSTEM: automations TYPE: bug,"
### What is your Test Scenario?
I'm trying to run our test code on multiple browsers, specifically entering in CreditCard details on our payment pages.
### What is the Current behavior?
It looks like text isn't being properly entered in to the Stripe payment iframes, on Safari. The same steps are successful in Chrome, Firefox and Edge.
The main concern here is that TestCafe continues on, believing it has actually entered text in to these iframes.
### What is the Expected behavior?
Text should be entered in to Stripe iFrames, regardless of browser. OR, at the least, if TestCafe is unable to locate/interact with the iFrame... it should fail at that point instead of continuing on.
### What is your web application and your TestCafe test code?
Your website URL (or attach your complete example):
You can target our live site, since the issue relates to being unable to enter data in the payment forms:
https://www.change.org/
Your complete test code (or attach your test files):
```js
import { t, Selector } from 'testcafe';
fixture('Join Membership with credit card').page('https://www.change.org/');
test('Guest user clicks contribute directly', async () => {
await t
.navigateTo('s/member')
.expect(Selector('[data-testid=""member_landing_page_inline_contribute_button""]').visible)
.ok();
await t
.click(Selector('[data-testid=""member_landing_page_inline_contribute_button""]'), { speed: 0.3 })
.expect(Selector('[data-testid=""member_payment_form""]').visible)
.ok();
await t
.click(Selector('[data-testid=""payment-option-button-creditCard'))
.expect(Selector('.iframe-form-element').visible)
.ok();
const emailAddressInput = Selector('[data-testid=""input_email""]').filterVisible();
const confirmationEmailInput = Selector('[data-testid=""input_confirmation_email""]').filterVisible();
const firstNameInput = Selector('[data-testid=""input_first_name""]');
const lastNameInput = Selector('[data-testid=""input_last_name""]');
await t
.typeText(emailAddressInput, 'email@email.com')
.typeText(confirmationEmailInput, 'email@email.com')
.typeText(firstNameInput, 'Your')
.typeText(lastNameInput, 'Name');
await t
.switchToIframe(Selector('[data-testid=""credit-card-number""] iframe'))
.typeText(Selector('input[name=""cardnumber""]'), '1234123412341234', { replace: true })
.expect(Selector('input[name=""cardnumber""]').value)
.eql('1234 1234 1234 1234')
.switchToMainWindow();
});
```
Your complete configuration file (if any):
```
N/A
```
Your complete test report:
```
Guest user clicks contribute directly
1) AssertionError: expected '' to deeply equal '1234 1234 1234 1234'
Browser: Safari 13.0.5 / macOS 10.15.3
31 |
32 | await t
33 | .switchToIframe(Selector('[data-testid=""credit-card-number""] iframe'))
34 | .typeText(Selector('input[name=""cardnumber""]'), '1234123412341234', { replace: true
})
35 | .expect(Selector('input[name=""cardnumber""]').value)
> 36 | .eql('1234 1234 1234 1234')
37 | .switchToMainWindow();
38 |});
39 |
at
(/Users/rcooper/work/github.com/change/regression-qaa/tests/users/demo.js:36:6)
1/1 failed (23s)
```
Screenshots:
```
N/A
```
### Steps to Reproduce:
1. Go to my website ...
3. Execute this command...
4. See the error...
### Your Environment details:
* testcafe version: 1.8.2
* node.js version: 12.14.1
* command-line arguments: testcafe safari
* browser name and version: Safari 13.0.5
* platform and version: macOS 10.15.3
",1.0,"Text not entered in to Stripe iFrame with Safari -
### What is your Test Scenario?
I'm trying to run our test code on multiple browsers, specifically entering in CreditCard details on our payment pages.
### What is the Current behavior?
It looks like text isn't being properly entered in to the Stripe payment iframes, on Safari. The same steps are successful in Chrome, Firefox and Edge.
The main concern here is that TestCafe continues on, believing it has actually entered text in to these iframes.
### What is the Expected behavior?
Text should be entered in to Stripe iFrames, regardless of browser. OR, at the least, if TestCafe is unable to locate/interact with the iFrame... it should fail at that point instead of continuing on.
### What is your web application and your TestCafe test code?
Your website URL (or attach your complete example):
You can target our live site, since the issue relates to being unable to enter data in the payment forms:
https://www.change.org/
Your complete test code (or attach your test files):
```js
import { t, Selector } from 'testcafe';
fixture('Join Membership with credit card').page('https://www.change.org/');
test('Guest user clicks contribute directly', async () => {
await t
.navigateTo('s/member')
.expect(Selector('[data-testid=""member_landing_page_inline_contribute_button""]').visible)
.ok();
await t
.click(Selector('[data-testid=""member_landing_page_inline_contribute_button""]'), { speed: 0.3 })
.expect(Selector('[data-testid=""member_payment_form""]').visible)
.ok();
await t
.click(Selector('[data-testid=""payment-option-button-creditCard'))
.expect(Selector('.iframe-form-element').visible)
.ok();
const emailAddressInput = Selector('[data-testid=""input_email""]').filterVisible();
const confirmationEmailInput = Selector('[data-testid=""input_confirmation_email""]').filterVisible();
const firstNameInput = Selector('[data-testid=""input_first_name""]');
const lastNameInput = Selector('[data-testid=""input_last_name""]');
await t
.typeText(emailAddressInput, 'email@email.com')
.typeText(confirmationEmailInput, 'email@email.com')
.typeText(firstNameInput, 'Your')
.typeText(lastNameInput, 'Name');
await t
.switchToIframe(Selector('[data-testid=""credit-card-number""] iframe'))
.typeText(Selector('input[name=""cardnumber""]'), '1234123412341234', { replace: true })
.expect(Selector('input[name=""cardnumber""]').value)
.eql('1234 1234 1234 1234')
.switchToMainWindow();
});
```
Your complete configuration file (if any):
```
N/A
```
Your complete test report:
```
Guest user clicks contribute directly
1) AssertionError: expected '' to deeply equal '1234 1234 1234 1234'
Browser: Safari 13.0.5 / macOS 10.15.3
31 |
32 | await t
33 | .switchToIframe(Selector('[data-testid=""credit-card-number""] iframe'))
34 | .typeText(Selector('input[name=""cardnumber""]'), '1234123412341234', { replace: true
})
35 | .expect(Selector('input[name=""cardnumber""]').value)
> 36 | .eql('1234 1234 1234 1234')
37 | .switchToMainWindow();
38 |});
39 |
at
(/Users/rcooper/work/github.com/change/regression-qaa/tests/users/demo.js:36:6)
1/1 failed (23s)
```
Screenshots:
```
N/A
```
### Steps to Reproduce:
1. Go to my website ...
3. Execute this command...
4. See the error...
### Your Environment details:
* testcafe version: 1.8.2
* node.js version: 12.14.1
* command-line arguments: testcafe safari
* browser name and version: Safari 13.0.5
* platform and version: macOS 10.15.3
",1,text not entered in to stripe iframe with safari if you have all reproduction steps with a complete sample app please share as many details as possible in the sections below make sure that you tried using the latest testcafe version where this behavior might have been already addressed before submitting an issue please check contributing md and existing issues in this repository in case a similar issue exists or was already addressed this may save your time and ours what is your test scenario i m trying to run our test code on multiple browsers specifically entering in creditcard details on our payment pages what is the current behavior it looks like text isn t being properly entered in to the stripe payment iframes on safari the same steps are successful in chrome firefox and edge the main concern here is that testcafe continues on believing it has actually entered text in to these iframes what is the expected behavior text should be entered in to stripe iframes regardless of browser or at the least if testcafe is unable to locate interact with the iframe it should fail at that point instead of continuing on what is your web application and your testcafe test code your website url or attach your complete example you can target our live site since the issue relates to being unable to enter data in the payment forms your complete test code or attach your test files js import t selector from testcafe fixture join membership with credit card page test guest user clicks contribute directly async await t navigateto s member expect selector visible ok await t click selector speed expect selector visible ok await t click selector data testid payment option button creditcard expect selector iframe form element visible ok const emailaddressinput selector filtervisible const confirmationemailinput selector filtervisible const firstnameinput selector const lastnameinput selector await t typetext emailaddressinput email email com typetext confirmationemailinput email email com typetext firstnameinput your typetext lastnameinput name await t switchtoiframe selector iframe typetext selector input replace true expect selector input value eql switchtomainwindow your complete configuration file if any n a your complete test report guest user clicks contribute directly assertionerror expected to deeply equal browser safari macos await t switchtoiframe selector iframe typetext selector input replace true expect selector input value eql switchtomainwindow at users rcooper work github com change regression qaa tests users demo js failed screenshots n a steps to reproduce go to my website execute this command see the error your environment details testcafe version node js version command line arguments testcafe safari browser name and version safari platform and version macos ,1
31856,6650285142.0,IssuesEvent,2017-09-28 15:48:53,fieldenms/tg,https://api.github.com/repos/fieldenms/tg,closed,Entity Master: saving defects during fast entry,Defect Entity master In progress P1 Property editor UI / UX,"### Description
There are couple of significant deficiencies while entity master is quickly saved through the use of `CTRL+S` shortcut immediately after editing.
The nature of these deficiencies is more or less intermittent, however some examples are quite easy to reproduce.
----------------------------------------
a) `tg-air`: in WA's compound master for new entity, choose `CAR` in `Type` autocompleter, press `CTRL+S`; after that `Priority` becomes erroneous and focused; type `2` into `Priority` and press `CTRL+S` immediately. For the very brief period of time `Scheduled Start` becomes erroneous and focused and then the focus is moved to `Type` property and `Scheduled Start` error disappears.
b) `tg-air`: in Equipment's compound master, type several characters into `KEY` and press `CTRL+S` immediately; replay it many times (usually over ~20) and following validation error appears:
`This property has recently been changed by another user. Please either edit the value back to [HGFHGFHGFHGFHGFHGFFGHFGHHGFFGHHGGFGHFGHFHGFHGFSFGH] to resolve the conflict or cancel all of your changes.`
c) `tg-air`: in Equipment's compound master, press and hold `S` character into `KEY` and after some time press `CTRL`; a couple of client-side errors appears making entity master fully unusable:
`SimultaneousSaveException {message: ""Simultaneous save exception: the save process has been already started before and not ended. Please, block UI until the save action completes.""}`
----------------------------------------
After initial investigation and discussion it appears that saving process is started earlier and after that validation starts too. Such validation after completion replaces the results of saving, which causes situations a) and b).
Situation c) is caused by over-restrictive client-side `Simultaneous save exception`: perhaps debouncing is a good idea here very similarly to validation debouncing.
### Expected outcome
Reliable fast entry and saving in entity masters.",1.0,"Entity Master: saving defects during fast entry - ### Description
There are couple of significant deficiencies while entity master is quickly saved through the use of `CTRL+S` shortcut immediately after editing.
The nature of these deficiencies is more or less intermittent, however some examples are quite easy to reproduce.
----------------------------------------
a) `tg-air`: in WA's compound master for new entity, choose `CAR` in `Type` autocompleter, press `CTRL+S`; after that `Priority` becomes erroneous and focused; type `2` into `Priority` and press `CTRL+S` immediately. For the very brief period of time `Scheduled Start` becomes erroneous and focused and then the focus is moved to `Type` property and `Scheduled Start` error disappears.
b) `tg-air`: in Equipment's compound master, type several characters into `KEY` and press `CTRL+S` immediately; replay it many times (usually over ~20) and following validation error appears:
`This property has recently been changed by another user. Please either edit the value back to [HGFHGFHGFHGFHGFHGFFGHFGHHGFFGHHGGFGHFGHFHGFHGFSFGH] to resolve the conflict or cancel all of your changes.`
c) `tg-air`: in Equipment's compound master, press and hold `S` character into `KEY` and after some time press `CTRL`; a couple of client-side errors appears making entity master fully unusable:
`SimultaneousSaveException {message: ""Simultaneous save exception: the save process has been already started before and not ended. Please, block UI until the save action completes.""}`
----------------------------------------
After initial investigation and discussion it appears that saving process is started earlier and after that validation starts too. Such validation after completion replaces the results of saving, which causes situations a) and b).
Situation c) is caused by over-restrictive client-side `Simultaneous save exception`: perhaps debouncing is a good idea here very similarly to validation debouncing.
### Expected outcome
Reliable fast entry and saving in entity masters.",0,entity master saving defects during fast entry description there are couple of significant deficiencies while entity master is quickly saved through the use of ctrl s shortcut immediately after editing the nature of these deficiencies is more or less intermittent however some examples are quite easy to reproduce a tg air in wa s compound master for new entity choose car in type autocompleter press ctrl s after that priority becomes erroneous and focused type into priority and press ctrl s immediately for the very brief period of time scheduled start becomes erroneous and focused and then the focus is moved to type property and scheduled start error disappears b tg air in equipment s compound master type several characters into key and press ctrl s immediately replay it many times usually over and following validation error appears this property has recently been changed by another user please either edit the value back to to resolve the conflict or cancel all of your changes c tg air in equipment s compound master press and hold s character into key and after some time press ctrl a couple of client side errors appears making entity master fully unusable simultaneoussaveexception message simultaneous save exception the save process has been already started before and not ended please block ui until the save action completes after initial investigation and discussion it appears that saving process is started earlier and after that validation starts too such validation after completion replaces the results of saving which causes situations a and b situation c is caused by over restrictive client side simultaneous save exception perhaps debouncing is a good idea here very similarly to validation debouncing expected outcome reliable fast entry and saving in entity masters ,0
320642,23817400173.0,IssuesEvent,2022-09-05 08:08:28,equinor/energyvision,https://api.github.com/repos/equinor/energyvision,closed,Documentation for Editors,📄 documentation,"This document should work as the one that exists for AEM, with screenshots and explanation of components / how to use them in Sanity.
AEM example: [https://statoilsrm.sharepoint.com/sites/EditordocumentationforAEMequinorcom/_layouts/15/Doc.aspx?sourcedoc={8e53650c-64b2-41de-bd94-96204ff172a7}&action=edit&wd=target%28General.one%7Cc6904292-2ef8-408e-802e-26da4408bd77%2FUse%20Google%20Chrome%7Cc3775a1b-1816-4410-a236-fa40febda329%2F%29&wdorigin=NavigationUrl](https://statoilsrm.sharepoint.com/sites/EditordocumentationforAEMequinorcom/_layouts/15/Doc.aspx?sourcedoc=%7B8e53650c-64b2-41de-bd94-96204ff172a7%7D&action=edit&wd=target%28General.one%7Cc6904292-2ef8-408e-802e-26da4408bd77%2FUse%20Google%20Chrome%7Cc3775a1b-1816-4410-a236-fa40febda329%2F%29&wdorigin=NavigationUrl)",1.0,"Documentation for Editors - This document should work as the one that exists for AEM, with screenshots and explanation of components / how to use them in Sanity.
AEM example: [https://statoilsrm.sharepoint.com/sites/EditordocumentationforAEMequinorcom/_layouts/15/Doc.aspx?sourcedoc={8e53650c-64b2-41de-bd94-96204ff172a7}&action=edit&wd=target%28General.one%7Cc6904292-2ef8-408e-802e-26da4408bd77%2FUse%20Google%20Chrome%7Cc3775a1b-1816-4410-a236-fa40febda329%2F%29&wdorigin=NavigationUrl](https://statoilsrm.sharepoint.com/sites/EditordocumentationforAEMequinorcom/_layouts/15/Doc.aspx?sourcedoc=%7B8e53650c-64b2-41de-bd94-96204ff172a7%7D&action=edit&wd=target%28General.one%7Cc6904292-2ef8-408e-802e-26da4408bd77%2FUse%20Google%20Chrome%7Cc3775a1b-1816-4410-a236-fa40febda329%2F%29&wdorigin=NavigationUrl)",0,documentation for editors this document should work as the one that exists for aem with screenshots and explanation of components how to use them in sanity aem example ,0
2650,12399316701.0,IssuesEvent,2020-05-21 04:49:25,IBM/ibm-spectrum-scale-csi,https://api.github.com/repos/IBM/ibm-spectrum-scale-csi,closed,Daemonset descibe needed in snapshot tool,Component: Automation Phase: Field Severity: 3 Target: Driver Target: Operator Type: Enhancement good first issue,"Today the snapshot tool is missing a describe of the Daemon Set
```kubectl describe ds ibm-spectrum-scale-csi -n ibm-spectrum-scale-csi-driver```
**Describe the solution you'd like**
Add the describe to the snapshot tool for the
```kubectl describe ds ibm-spectrum-scale-csi -n ibm-spectrum-scale-csi-driver```
",1.0,"Daemonset descibe needed in snapshot tool - Today the snapshot tool is missing a describe of the Daemon Set
```kubectl describe ds ibm-spectrum-scale-csi -n ibm-spectrum-scale-csi-driver```
**Describe the solution you'd like**
Add the describe to the snapshot tool for the
```kubectl describe ds ibm-spectrum-scale-csi -n ibm-spectrum-scale-csi-driver```
",1,daemonset descibe needed in snapshot tool today the snapshot tool is missing a describe of the daemon set kubectl describe ds ibm spectrum scale csi n ibm spectrum scale csi driver describe the solution you d like add the describe to the snapshot tool for the kubectl describe ds ibm spectrum scale csi n ibm spectrum scale csi driver ,1
698659,23988150658.0,IssuesEvent,2022-09-13 21:10:12,CDCgov/prime-reportstream,https://api.github.com/repos/CDCgov/prime-reportstream,closed,PII Leakage to HHS Protect,onboarding-ops High Priority,"HHS Protect has informed us to that both Abbott and BD Veritor are putting the patient's first and last name into the `sending_facility_namespace_id` field, so they are receiving PII. I don't see that field mapped for HHS Protect, so we will need to see what field they are referring to, and then mask that field. Looking at a file for HHS Protect I do see there are names that appear in the ordering provider first and last name field sometimes, so perhaps that's the field?
We need to reach out to Kim Del Guerico and get more details.
This is high priority.",1.0,"PII Leakage to HHS Protect - HHS Protect has informed us to that both Abbott and BD Veritor are putting the patient's first and last name into the `sending_facility_namespace_id` field, so they are receiving PII. I don't see that field mapped for HHS Protect, so we will need to see what field they are referring to, and then mask that field. Looking at a file for HHS Protect I do see there are names that appear in the ordering provider first and last name field sometimes, so perhaps that's the field?
We need to reach out to Kim Del Guerico and get more details.
This is high priority.",0,pii leakage to hhs protect hhs protect has informed us to that both abbott and bd veritor are putting the patient s first and last name into the sending facility namespace id field so they are receiving pii i don t see that field mapped for hhs protect so we will need to see what field they are referring to and then mask that field looking at a file for hhs protect i do see there are names that appear in the ordering provider first and last name field sometimes so perhaps that s the field we need to reach out to kim del guerico and get more details this is high priority ,0
6034,21920365227.0,IssuesEvent,2022-05-22 13:28:36,surge-synthesizer/surge,https://api.github.com/repos/surge-synthesizer/surge,closed,FX Unit Streaming Not Stable under Changing of Int Param Bounds (esp AW type),Host Automation Bug Report FX Plugin,"**Vospi — Today at 6:56 AM**
hey guys!
when using Surge FX XT, adding airwindows effects to the list on your side (which I love) breaks previous presets of the plugin and thus breaks projects.
I suppose that's because currently it uses a knob position for it. I can clearly see that it's supposed to be ToTape6 in the preset, but it's Infinity for me now.
Thus, unless you definitely know what did you do back then, you break your legacy projects by updating.
While AIrwindows-in-a-package is a fantastic selling point for me personally, that's a huge problem for any production in my eyes.
(installed today's Nigthly)
**Robbert — Today at 7:02 AM**
I just noticed automation gestures don't seem to be working correctly. I haven't checked if they work correctly when actively dragging parameters around, but clicking/holding down on a parameter doesn't send the gesture start, and hosts thus also won't highlight the parameter in their generic UIs or automation lanes.
Filter Type
you mean saving presets host-side?
yeah probably, AW type is normalized to 0.0...1.0 range
so yeah every time we add something it's gonna end up mangled
which means we should add built in patch save/load
just like we have in Surge XT
**baconpaul — Today at 7:27 AM**
Ahh shoot
Well I can fix that I bet thank you. Yes it’s all just 0…1 save but I can test if you are an int at stream time. Automation will be hard tho
**EvilDragon — Today at 7:28 AM**
I wouldn't worry about automating that param tbh
that's Asking For Trouble (TM)
**baconpaul — Today at 7:28 AM**
Yeah no this is just at get set state time
That I would fix",1.0,"FX Unit Streaming Not Stable under Changing of Int Param Bounds (esp AW type) - **Vospi — Today at 6:56 AM**
hey guys!
when using Surge FX XT, adding airwindows effects to the list on your side (which I love) breaks previous presets of the plugin and thus breaks projects.
I suppose that's because currently it uses a knob position for it. I can clearly see that it's supposed to be ToTape6 in the preset, but it's Infinity for me now.
Thus, unless you definitely know what did you do back then, you break your legacy projects by updating.
While AIrwindows-in-a-package is a fantastic selling point for me personally, that's a huge problem for any production in my eyes.
(installed today's Nigthly)
**Robbert — Today at 7:02 AM**
I just noticed automation gestures don't seem to be working correctly. I haven't checked if they work correctly when actively dragging parameters around, but clicking/holding down on a parameter doesn't send the gesture start, and hosts thus also won't highlight the parameter in their generic UIs or automation lanes.
Filter Type
you mean saving presets host-side?
yeah probably, AW type is normalized to 0.0...1.0 range
so yeah every time we add something it's gonna end up mangled
which means we should add built in patch save/load
just like we have in Surge XT
**baconpaul — Today at 7:27 AM**
Ahh shoot
Well I can fix that I bet thank you. Yes it’s all just 0…1 save but I can test if you are an int at stream time. Automation will be hard tho
**EvilDragon — Today at 7:28 AM**
I wouldn't worry about automating that param tbh
that's Asking For Trouble (TM)
**baconpaul — Today at 7:28 AM**
Yeah no this is just at get set state time
That I would fix",1,fx unit streaming not stable under changing of int param bounds esp aw type vospi — today at am hey guys when using surge fx xt adding airwindows effects to the list on your side which i love breaks previous presets of the plugin and thus breaks projects i suppose that s because currently it uses a knob position for it i can clearly see that it s supposed to be in the preset but it s infinity for me now thus unless you definitely know what did you do back then you break your legacy projects by updating while airwindows in a package is a fantastic selling point for me personally that s a huge problem for any production in my eyes installed today s nigthly robbert — today at am i just noticed automation gestures don t seem to be working correctly i haven t checked if they work correctly when actively dragging parameters around but clicking holding down on a parameter doesn t send the gesture start and hosts thus also won t highlight the parameter in their generic uis or automation lanes filter type you mean saving presets host side yeah probably aw type is normalized to range so yeah every time we add something it s gonna end up mangled which means we should add built in patch save load just like we have in surge xt baconpaul — today at am ahh shoot well i can fix that i bet thank you yes it’s all just … save but i can test if you are an int at stream time automation will be hard tho evildragon — today at am i wouldn t worry about automating that param tbh that s asking for trouble tm baconpaul — today at am yeah no this is just at get set state time that i would fix,1
3831,14664769398.0,IssuesEvent,2020-12-29 12:47:12,modi-w/AutoVersionsDB,https://api.github.com/repos/modi-w/AutoVersionsDB,opened,"When error occure when running the file ""publish.cmd"", the error doesn't seen in the process log file.",area-automation,"**Describe the bug**
When running the file ""publish.cmd"" and an error occurred, the error doesn't see in the process log file.
**To Reproduce**
Steps to reproduce the behavior:
1. Create an error for the publish process (for example: create a compilation exception (syntax error) on the console app).
2. Rn the file ""publish.cmd"" (on the root folder)
3. check the new log created log file at the ""\automationLogs"" folder
**Action Items:**
1.
2.
3.
**Updates**
1.
",1.0,"When error occure when running the file ""publish.cmd"", the error doesn't seen in the process log file. - **Describe the bug**
When running the file ""publish.cmd"" and an error occurred, the error doesn't see in the process log file.
**To Reproduce**
Steps to reproduce the behavior:
1. Create an error for the publish process (for example: create a compilation exception (syntax error) on the console app).
2. Rn the file ""publish.cmd"" (on the root folder)
3. check the new log created log file at the ""\automationLogs"" folder
**Action Items:**
1.
2.
3.
**Updates**
1.
",1,when error occure when running the file publish cmd the error doesn t seen in the process log file describe the bug when running the file publish cmd and an error occurred the error doesn t see in the process log file to reproduce steps to reproduce the behavior create an error for the publish process for example create a compilation exception syntax error on the console app rn the file publish cmd on the root folder check the new log created log file at the automationlogs folder action items updates ,1
416031,28064475899.0,IssuesEvent,2023-03-29 14:34:40,schreiberx/sweet,https://api.github.com/repos/schreiberx/sweet,closed,Installing SWEET on macOS: miniconda not actually required for compilation,documentation,We should update [INSTALL.md](https://github.com/schreiberx/sweet/blob/master/INSTALL.md)/[INSTALL_MACOS.md](https://github.com/schreiberx/sweet/blob/master/INSTALL_MACOSX.md) to make clear that miniconda does not have to be installed when setting up SWEET (Install file for macOS recommends to install Python packages via Homebrew),1.0,Installing SWEET on macOS: miniconda not actually required for compilation - We should update [INSTALL.md](https://github.com/schreiberx/sweet/blob/master/INSTALL.md)/[INSTALL_MACOS.md](https://github.com/schreiberx/sweet/blob/master/INSTALL_MACOSX.md) to make clear that miniconda does not have to be installed when setting up SWEET (Install file for macOS recommends to install Python packages via Homebrew),0,installing sweet on macos miniconda not actually required for compilation we should update to make clear that miniconda does not have to be installed when setting up sweet install file for macos recommends to install python packages via homebrew ,0
242183,20203357821.0,IssuesEvent,2022-02-11 17:24:53,open-metadata/OpenMetadata,https://api.github.com/repos/open-metadata/OpenMetadata,closed,Revamping selenium test cases for Entity Details Page,P1 E2E-testing,"**Is your feature request related to a problem? Please describe.**
Revamping all the current test cases.
Currently, there is duplication of code and many variables are repeated for all classes.
**Describe the solution you'd like**
- Reduce duplication of code.
- Add variable to config for better control over test cases.
**Describe alternatives you've considered**
NA
**Additional context**
Entity Details Page involves the following:
- Table Details Page
- Dashboard Details Page
- Pipeline Details Page
- Topic Details Page",1.0,"Revamping selenium test cases for Entity Details Page - **Is your feature request related to a problem? Please describe.**
Revamping all the current test cases.
Currently, there is duplication of code and many variables are repeated for all classes.
**Describe the solution you'd like**
- Reduce duplication of code.
- Add variable to config for better control over test cases.
**Describe alternatives you've considered**
NA
**Additional context**
Entity Details Page involves the following:
- Table Details Page
- Dashboard Details Page
- Pipeline Details Page
- Topic Details Page",0,revamping selenium test cases for entity details page is your feature request related to a problem please describe revamping all the current test cases currently there is duplication of code and many variables are repeated for all classes describe the solution you d like reduce duplication of code add variable to config for better control over test cases describe alternatives you ve considered na additional context entity details page involves the following table details page dashboard details page pipeline details page topic details page,0
142628,11488089257.0,IssuesEvent,2020-02-11 13:17:21,joeyfrog/hooktest,https://api.github.com/repos/joeyfrog/hooktest,closed,[XRAY] Vulnerability in artifact: kkkkkkkkkkkkkkkk,test xray,"This is an automated issue made via XRAY Github webhook.
The deployed artifact(s) **['jjjjjjjjjjjjjjjjjjj', 'kkkkkkkkkkkkkkkk']** contain the following vaulnerable dependencie(s):
['ant-1.9.4.jar', 'aopalliance-repackaged-2.4.0-b09.jar', 'sdfsffseefewefwwef.jar', 'vsdvsdfsfsdfsfsfsfsdf.jar']
Here is the sent JSON from XRAY:
[
{
""created"": ""2018-03-12T19:12:06.702Z"",
""description"": ""custom-glassfish"",
""impacted_artifacts"": [
{
""depth"": 0,
""display_name"": ""test:6639"",
""infected_files"": [
{
""depth"": 0,
""display_name"": ""ant-1.9.4.jar"",
""name"": ""ant-1.9.4.jar"",
""parent_sha"": ""c9be3f74c49d2f3ea273de9c9e172ea99be696d995f31876d43185113bbe91bb"",
""path"": """",
""pkg_type"": ""Generic"",
""sha256"": ""649ae0730251de07b8913f49286d46bba7b92d47c5f332610aa426c4f02161d8""
},
{
""depth"": 0,
""display_name"": ""org.glassfish.hk2.external:aopalliance-repackaged:2.4.0-b09"",
""name"": ""aopalliance-repackaged-2.4.0-b09.jar"",
""parent_sha"": ""c9be3f74c49d2f3ea273de9c9e172ea99be696d995f31876d43185113bbe91bb"",
""path"": """",
""pkg_type"": ""Maven"",
""sha256"": ""a97667a617fa5d427c2e95ce6f3eab5cf2d21d00c69ad2a7524ff6d9a9144f58""
}
],
""name"": ""jjjjjjjjjjjjjjjjjjj"",
""parent_sha"": ""c9be3f74c49d2f3ea273de9c9e172ea99be696d995f31876d43185113bbe91bb"",
""path"": ""artifactory-xray/builds/"",
""pkg_type"": ""Build"",
""sha1"": ""737145943754ac99a678d366269dcafc205233ba"",
""sha256"": ""c9be3f74c49d2f3ea273de9c9e172ea99be696d995f31876d43185113bbe91bb""
}
],
""provider"": ""Custom"",
""severity"": ""Critical"",
""summary"": ""custom-glassfish"",
""type"": ""security""
},
{
""description"": ""Apache License 2.0"",
""impacted_artifacts"": [
{
""depth"": 0,
""display_name"": ""test:6639"",
""infected_files"": [
{
""depth"": 0,
""display_name"": ""ant-1.9.4.jar"",
""name"": ""sdfsffseefewefwwef.jar"",
""parent_sha"": ""c9be3f74c49d2f3ea273de9c9e172ea99be696d995f31876d43185113bbe91bb"",
""path"": """",
""pkg_type"": ""Generic"",
""sha256"": ""649ae0730251de07b8913f49286d46bba7b92d47c5f332610aa426c4f02161d8""
},
{
""depth"": 0,
""display_name"": ""org.glassfish.hk2.external:aopalliance-repackaged:2.4.0-b09"",
""name"": ""vsdvsdfsfsdfsfsfsfsdf.jar"",
""parent_sha"": ""c9be3f74c49d2f3ea273de9c9e172ea99be696d995f31876d43185113bbe91bb"",
""path"": """",
""pkg_type"": ""Maven"",
""sha256"": ""a97667a617fa5d427c2e95ce6f3eab5cf2d21d00c69ad2a7524ff6d9a9144f58""
}
],
""name"": ""kkkkkkkkkkkkkkkk"",
""parent_sha"": ""c9be3f74c49d2f3ea273de9c9e172ea99be696d995f31876d43185113bbe91bb"",
""path"": ""artifactory-xray/builds/"",
""pkg_type"": ""Build"",
""sha1"": ""737145943754ac99a678d366269dcafc205233ba"",
""sha256"": ""c9be3f74c49d2f3ea273de9c9e172ea99be696d995f31876d43185113bbe91bb""
}
],
""severity"": ""Critical"",
""summary"": ""Apache-2.0"",
""type"": ""License""
}
]",1.0,"[XRAY] Vulnerability in artifact: kkkkkkkkkkkkkkkk - This is an automated issue made via XRAY Github webhook.
The deployed artifact(s) **['jjjjjjjjjjjjjjjjjjj', 'kkkkkkkkkkkkkkkk']** contain the following vulnerable dependencies:
['ant-1.9.4.jar', 'aopalliance-repackaged-2.4.0-b09.jar', 'sdfsffseefewefwwef.jar', 'vsdvsdfsfsdfsfsfsfsdf.jar']
Here is the sent JSON from XRAY:
[
{
""created"": ""2018-03-12T19:12:06.702Z"",
""description"": ""custom-glassfish"",
""impacted_artifacts"": [
{
""depth"": 0,
""display_name"": ""test:6639"",
""infected_files"": [
{
""depth"": 0,
""display_name"": ""ant-1.9.4.jar"",
""name"": ""ant-1.9.4.jar"",
""parent_sha"": ""c9be3f74c49d2f3ea273de9c9e172ea99be696d995f31876d43185113bbe91bb"",
""path"": """",
""pkg_type"": ""Generic"",
""sha256"": ""649ae0730251de07b8913f49286d46bba7b92d47c5f332610aa426c4f02161d8""
},
{
""depth"": 0,
""display_name"": ""org.glassfish.hk2.external:aopalliance-repackaged:2.4.0-b09"",
""name"": ""aopalliance-repackaged-2.4.0-b09.jar"",
""parent_sha"": ""c9be3f74c49d2f3ea273de9c9e172ea99be696d995f31876d43185113bbe91bb"",
""path"": """",
""pkg_type"": ""Maven"",
""sha256"": ""a97667a617fa5d427c2e95ce6f3eab5cf2d21d00c69ad2a7524ff6d9a9144f58""
}
],
""name"": ""jjjjjjjjjjjjjjjjjjj"",
""parent_sha"": ""c9be3f74c49d2f3ea273de9c9e172ea99be696d995f31876d43185113bbe91bb"",
""path"": ""artifactory-xray/builds/"",
""pkg_type"": ""Build"",
""sha1"": ""737145943754ac99a678d366269dcafc205233ba"",
""sha256"": ""c9be3f74c49d2f3ea273de9c9e172ea99be696d995f31876d43185113bbe91bb""
}
],
""provider"": ""Custom"",
""severity"": ""Critical"",
""summary"": ""custom-glassfish"",
""type"": ""security""
},
{
""description"": ""Apache License 2.0"",
""impacted_artifacts"": [
{
""depth"": 0,
""display_name"": ""test:6639"",
""infected_files"": [
{
""depth"": 0,
""display_name"": ""ant-1.9.4.jar"",
""name"": ""sdfsffseefewefwwef.jar"",
""parent_sha"": ""c9be3f74c49d2f3ea273de9c9e172ea99be696d995f31876d43185113bbe91bb"",
""path"": """",
""pkg_type"": ""Generic"",
""sha256"": ""649ae0730251de07b8913f49286d46bba7b92d47c5f332610aa426c4f02161d8""
},
{
""depth"": 0,
""display_name"": ""org.glassfish.hk2.external:aopalliance-repackaged:2.4.0-b09"",
""name"": ""vsdvsdfsfsdfsfsfsfsdf.jar"",
""parent_sha"": ""c9be3f74c49d2f3ea273de9c9e172ea99be696d995f31876d43185113bbe91bb"",
""path"": """",
""pkg_type"": ""Maven"",
""sha256"": ""a97667a617fa5d427c2e95ce6f3eab5cf2d21d00c69ad2a7524ff6d9a9144f58""
}
],
""name"": ""kkkkkkkkkkkkkkkk"",
""parent_sha"": ""c9be3f74c49d2f3ea273de9c9e172ea99be696d995f31876d43185113bbe91bb"",
""path"": ""artifactory-xray/builds/"",
""pkg_type"": ""Build"",
""sha1"": ""737145943754ac99a678d366269dcafc205233ba"",
""sha256"": ""c9be3f74c49d2f3ea273de9c9e172ea99be696d995f31876d43185113bbe91bb""
}
],
""severity"": ""Critical"",
""summary"": ""Apache-2.0"",
""type"": ""License""
}
]",0, vulnerability in artifact kkkkkkkkkkkkkkkk this is an automated issue made via xray github webhook the deployed artifact s contain the following vaulnerable dependencie s here is the sent json from xray created description custom glassfish impacted artifacts depth display name test infected files depth display name ant jar name ant jar parent sha path pkg type generic depth display name org glassfish external aopalliance repackaged name aopalliance repackaged jar parent sha path pkg type maven name jjjjjjjjjjjjjjjjjjj parent sha path artifactory xray builds pkg type build provider custom severity critical summary custom glassfish type security description apache license impacted artifacts depth display name test infected files depth display name ant jar name sdfsffseefewefwwef jar parent sha path pkg type generic depth display name org glassfish external aopalliance repackaged name vsdvsdfsfsdfsfsfsfsdf jar parent sha path pkg type maven name kkkkkkkkkkkkkkkk parent sha path artifactory xray builds pkg type build severity critical summary apache type license ,0
170335,14256098894.0,IssuesEvent,2020-11-20 00:11:01,irods/irods,https://api.github.com/repos/irods/irods,closed,"""istream write"" ignores --no-trunc when --append is present",documentation,"- [x] master
- [x] 4-2-stable
---
## Bug Report
The following options are not mutually exclusive. The `else` keyword needs to be removed so that `--no-trunc` and `--append` can be used at the same time. See [istream.cpp lines 300-309](https://github.com/irods/irods_client_icommands/blob/7560dc7f9f5b50faacc7312447155e4672709bac/src/istream.cpp#L300-L309)",1.0,"""istream write"" ignores --no-trunc when --append is present - - [x] master
- [x] 4-2-stable
---
## Bug Report
The following options are not mutually exclusive. The `else` keyword needs to be removed so that `--no-trunc` and `--append` can be used at the same time. See [istream.cpp lines 300-309](https://github.com/irods/irods_client_icommands/blob/7560dc7f9f5b50faacc7312447155e4672709bac/src/istream.cpp#L300-L309)",0, istream write ignores no trunc when append is present master stable bug report the following options are not mutually exclusive the else keyword needs to be removed so that no trunc and append can be used at the same time see ,0
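The change requested above is in C++ (istream.cpp), but the flag-handling pattern it asks for is easy to illustrate. Below is a minimal JavaScript sketch of that pattern only, not the actual istream code; the option names mirror the flags in the report, and the target fields are hypothetical.
```js
// Sketch of the non-exclusive option handling requested above (hypothetical names).
function applyWriteOptions(opts, target) {
  if (opts.noTrunc) {
    target.truncate = false;       // --no-trunc: keep existing bytes beyond the written range
  }
  if (opts.append) {               // independent `if` (no `else`): --append may combine with --no-trunc
    target.offset = target.size;   // --append: start writing at the current end
  }
  return target;
}

// Both flags applied to the same write:
console.log(applyWriteOptions({ noTrunc: true, append: true }, { size: 1024 }));
```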
73417,15254331900.0,IssuesEvent,2021-02-20 11:33:10,NixOS/nixpkgs,https://api.github.com/repos/NixOS/nixpkgs,closed,Vulnerability roundup 84: go-1.14.2: 6 advisories,1.severity: security,"[search](https://search.nix.gsc.io/?q=go&i=fosho&repos=NixOS-nixpkgs), [files](https://github.com/NixOS/nixpkgs/search?utf8=%E2%9C%93&q=go+in%3Apath&type=Code)
* [ ] [CVE-2018-17075](https://nvd.nist.gov/vuln/detail/CVE-2018-17075) CVSSv3=7.5 (nixos-unstable)
* [ ] [CVE-2018-17142](https://nvd.nist.gov/vuln/detail/CVE-2018-17142) CVSSv3=7.5 (nixos-unstable)
* [ ] [CVE-2018-17143](https://nvd.nist.gov/vuln/detail/CVE-2018-17143) CVSSv3=7.5 (nixos-unstable)
* [ ] [CVE-2018-17846](https://nvd.nist.gov/vuln/detail/CVE-2018-17846) CVSSv3=7.5 (nixos-unstable)
* [ ] [CVE-2018-17847](https://nvd.nist.gov/vuln/detail/CVE-2018-17847) CVSSv3=7.5 (nixos-unstable)
* [ ] [CVE-2018-17848](https://nvd.nist.gov/vuln/detail/CVE-2018-17848) CVSSv3=7.5 (nixos-unstable)
Scanned versions: nixos-unstable: 0f5ce2fac0c. May contain false positives.
",True,"Vulnerability roundup 84: go-1.14.2: 6 advisories - [search](https://search.nix.gsc.io/?q=go&i=fosho&repos=NixOS-nixpkgs), [files](https://github.com/NixOS/nixpkgs/search?utf8=%E2%9C%93&q=go+in%3Apath&type=Code)
* [ ] [CVE-2018-17075](https://nvd.nist.gov/vuln/detail/CVE-2018-17075) CVSSv3=7.5 (nixos-unstable)
* [ ] [CVE-2018-17142](https://nvd.nist.gov/vuln/detail/CVE-2018-17142) CVSSv3=7.5 (nixos-unstable)
* [ ] [CVE-2018-17143](https://nvd.nist.gov/vuln/detail/CVE-2018-17143) CVSSv3=7.5 (nixos-unstable)
* [ ] [CVE-2018-17846](https://nvd.nist.gov/vuln/detail/CVE-2018-17846) CVSSv3=7.5 (nixos-unstable)
* [ ] [CVE-2018-17847](https://nvd.nist.gov/vuln/detail/CVE-2018-17847) CVSSv3=7.5 (nixos-unstable)
* [ ] [CVE-2018-17848](https://nvd.nist.gov/vuln/detail/CVE-2018-17848) CVSSv3=7.5 (nixos-unstable)
Scanned versions: nixos-unstable: 0f5ce2fac0c. May contain false positives.
",0,vulnerability roundup go advisories nixos unstable nixos unstable nixos unstable nixos unstable nixos unstable nixos unstable scanned versions nixos unstable may contain false positives ,0
2924,12823331628.0,IssuesEvent,2020-07-06 11:31:10,GoodDollar/GoodDAPP,https://api.github.com/repos/GoodDollar/GoodDAPP,closed,Add options for quick re-login,automation,"@YuryAnanyev
this ticket is connected to the Profile edit issue
1. Implement a click handler for the back arrow to navigate back, so it doesn't require a fresh login, which takes time.
2. login by setting localStorage variables localStorage.setItem('GD_mnemonic',mnemonic) or setItem('GD_masterSeed',torus user private key) and setItem('GD_isLoggedIn',true)",1.0,"Add options for quick re-login - @YuryAnanyev
this ticket is connected to the Profile edit issue
1. Implement a click handler for the back arrow to navigate back, so it doesn't require a fresh login, which takes time.
2. login by setting localStorage variables localStorage.setItem('GD_mnemonic',mnemonic) or setItem('GD_masterSeed',torus user private key) and setItem('GD_isLoggedIn',true)",1,add options for quick re login yuryananyev this ticket is connected to the profile edit issue implement a click on back arrow function to navigate back so it doesnt require a fresh login which takes time login by setting localstorage variables localstorage setitem gd mnemonic mnemonic or setitem gd masterseed torus user private key and setitem gd isloggedin true ,1
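The second step above describes a localStorage-based shortcut. The sketch below only illustrates that idea, assuming a browser context; the storage keys come from the issue text, while the helper name and the reload step are assumptions.
```js
// Hypothetical helper for the quick re-login flow described above.
function quickLogin({ mnemonic, torusPrivateKey } = {}) {
  if (mnemonic) {
    localStorage.setItem('GD_mnemonic', mnemonic);           // seed-phrase based login
  } else if (torusPrivateKey) {
    localStorage.setItem('GD_masterSeed', torusPrivateKey);  // torus user private key
  }
  localStorage.setItem('GD_isLoggedIn', 'true');
  window.location.reload(); // let the app pick up the stored credentials instead of a fresh login
}
```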
817733,30652193665.0,IssuesEvent,2023-07-25 09:38:24,enzliguor/PokemonAO,https://api.github.com/repos/enzliguor/PokemonAO,closed,Containerize the PokemonAO app,medium priority deploy feature,Write a Dockerfile that uses the PokemonAO JAR to generate a container in which our app will run,1.0,Containerize the PokemonAO app - Write a Dockerfile that uses the PokemonAO JAR to generate a container in which our app will run,0,containerizzare l app pokemonao scrivere un dockerfile che usi il jar di pokemonao per generare un container in cui girerà la nostra app,0
286055,8783370558.0,IssuesEvent,2018-12-20 05:33:56,servinglynk/hslynk-open-source-docs,https://api.github.com/repos/servinglynk/hslynk-open-source-docs,closed,automated view syncing to survey edits,enhancement next priority reporting feature waiting on external resource,"Old/superseded/deleted answered versions are not removed, but marked ""question_name-old-v1"", ""question_name-old-v2"", etc.
@logicsandeep: how many hours do you think this will take to complete?",1.0,"automated view syncing to survey edits - Old/superseded/deleted answered versions are not removed, but marked ""question_name-old-v1"", ""question_name-old-v2"", etc.
@logicsandeep: how many hours do you think this will take to complete?",0,automated view syncing to survey edits old superceded deleted answered versions are not removed but marked question name old question name old etc logicsandeep how many hours do you think this will take to complete ,0
116082,11900206865.0,IssuesEvent,2020-03-30 10:15:03,Barbelot/Physarum3D,https://api.github.com/repos/Barbelot/Physarum3D,closed,Some missing steps in the instructions,documentation,"I'm gradually working through and getting to the stage where I have a working example but there is a simpler way - post a working project instead of a partial project + instructions.
As someone who downloads an unusually high number of Unity GitHub projects to try them out (it's how I learn best), can I share a pattern I've observed? Repos that contain an entire Unity project nearly always work. Repos that don't do this have a much higher failure rate - especially as more time passes and more bitrot sets in!
I'll post my project once I've got it working if that helps.",1.0,"Some missing steps in the instructions - I'm gradually working through and getting to the stage where I have a working example but there is a simpler way - post a working project instead of a partial project + instructions.
As someone who downloads an unusually high number of Unity GitHub projects to try them out (it's how I learn best), can I share a pattern I've observed? Repos that contain an entire Unity project nearly always work. Repos that don't do this have a much higher failure rate - especially as more time passes and more bitrot sets in!
I'll post my project once I've got it working if that helps.",0,some missing steps in the instructions i m gradually working through and getting to the stage where i have a working example but there is a simpler way post a working project instead of a partial project instructions as someone that downloads an unusually high number of unity github projects to try them out it s how i learn best can i share a pattern i ve observer repos that contain an entire unity project nearly always work repos that don t do this have a much higher failure rate especially as more time passes and more bitrot sets in i ll post my project once i ve got it working if that helps ,0
1656,10540413684.0,IssuesEvent,2019-10-02 08:21:21,MicrosoftDocs/azure-docs,https://api.github.com/repos/MicrosoftDocs/azure-docs,closed,"Updated script using ""Az"" module",Pri1 automation/svc cxp product-issue shared-capabilities/subsvc triaged,"Is there a script that uses the new ""Az"" module instead of the ""AzureRM"" module?
I tried to replace all ""AzureRM"" module cmdlets with ""Az"" module cmdlets and got two errors:
Import-Module : The specified module 'Az.Profile' was not loaded because no valid module file was found in any module directory.
At #FILEPATH#\New-RunAsAccount.ps1:92 char:1
+ Import-Module Az.Profile
+ ~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : ResourceUnavailable: (Az.Profile:String) [Import-Module], FileNotFoundException
+ FullyQualifiedErrorId : Modules_ModuleNotFound,Microsoft.PowerShell.Commands.ImportModuleCommand
And:
#FILEPATH#\New-RunAsAccount.ps1 : Please install the latest Azure PowerShell and retry. Relevant doc url :
https://docs.microsoft.com/powershell/azureps-cmdlets-docs/
At line:1 char:1
+ .\New-RunAsAccount.ps1 -ResourceGroup $RGAutomationName -AutomationAc ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (:) [Write-Error], WriteErrorException
+ FullyQualifiedErrorId : Microsoft.PowerShell.Commands.WriteErrorException,New-RunAsAccount.ps1
Has anyone faced this difficulty?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 56e2500f-e1f5-bc87-6e5c-f41b59265049
* Version Independent ID: d212be48-7d05-847d-3045-cea82e6ba603
* Content: [Manage Azure Automation Run As accounts](https://docs.microsoft.com/en-us/azure/automation/manage-runas-account#feedback)
* Content Source: [articles/automation/manage-runas-account.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/manage-runas-account.md)
* Service: **automation**
* Sub-service: **shared-capabilities**
* GitHub Login: @bobbytreed
* Microsoft Alias: **robreed**",1.0,"Updated script using ""Az"" module - Is there a script that uses the new ""Az"" module instead of the ""AzureRM"" module?
I tried to replace all ""AzureRM"" module cmdlets with ""Az"" module cmdlets and got two errors:
Import-Module : The specified module 'Az.Profile' was not loaded because no valid module file was found in any module directory.
At #FILEPATH#\New-RunAsAccount.ps1:92 char:1
+ Import-Module Az.Profile
+ ~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : ResourceUnavailable: (Az.Profile:String) [Import-Module], FileNotFoundException
+ FullyQualifiedErrorId : Modules_ModuleNotFound,Microsoft.PowerShell.Commands.ImportModuleCommand
And:
#FILEPATH#\New-RunAsAccount.ps1 : Please install the latest Azure PowerShell and retry. Relevant doc url :
https://docs.microsoft.com/powershell/azureps-cmdlets-docs/
At line:1 char:1
+ .\New-RunAsAccount.ps1 -ResourceGroup $RGAutomationName -AutomationAc ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (:) [Write-Error], WriteErrorException
+ FullyQualifiedErrorId : Microsoft.PowerShell.Commands.WriteErrorException,New-RunAsAccount.ps1
Has anyone faced this difficulty?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 56e2500f-e1f5-bc87-6e5c-f41b59265049
* Version Independent ID: d212be48-7d05-847d-3045-cea82e6ba603
* Content: [Manage Azure Automation Run As accounts](https://docs.microsoft.com/en-us/azure/automation/manage-runas-account#feedback)
* Content Source: [articles/automation/manage-runas-account.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/manage-runas-account.md)
* Service: **automation**
* Sub-service: **shared-capabilities**
* GitHub Login: @bobbytreed
* Microsoft Alias: **robreed**",1,updated script using az module there is some script that use new az module instead of azurerm module i tried to replace all azurerm module cmdlets by az module cmdlets and got two errors import module the specified module az profile was not loaded because no valid module file was found in any module directory at filepath new runasaccount char import module az profile categoryinfo resourceunavailable az profile string filenotfoundexception fullyqualifiederrorid modules modulenotfound microsoft powershell commands importmodulecommand and filepath new runasaccount please install the latest azure powershell and retry relevant doc url at line char new runasaccount resourcegroup rgautomationname automationac categoryinfo notspecified writeerrorexception fullyqualifiederrorid microsoft powershell commands writeerrorexception new runasaccount has anyone faced this difficulty document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service automation sub service shared capabilities github login bobbytreed microsoft alias robreed ,1
440512,12700953023.0,IssuesEvent,2020-06-22 17:14:37,ansible/awx,https://api.github.com/repos/ansible/awx,opened,Add edit button to Organization -> Teams list rows,component:ui_next priority:medium state:needs_devel type:enhancement,"##### ISSUE TYPE
- Feature Idea
##### SUMMARY
If the user has the proper permissions to edit a team we should provide them with an edit button on this list:
which would redirect them to `/#/teams/:id/edit`.
This would be consistent with list behavior throughout the app.
",1.0,"Add edit button to Organization -> Teams list rows - ##### ISSUE TYPE
- Feature Idea
##### SUMMARY
If the user has the proper permissions to edit a team we should provide them with an edit button on this list:
which would redirect them to `/#/teams/:id/edit`.
This would be consistent with list behavior throughout the app.
",0,add edit button to organization teams list rows issue type feature idea summary if the user has the proper permissions to edit a team we should provide them with an edit button on this list img width alt screen shot at pm src which would redirect them to teams id edit this would be consistent with list behavior throughout the app ,0
34064,9257080833.0,IssuesEvent,2019-03-17 01:47:56,SHPEUCF/shpeucfapp,https://api.github.com/repos/SHPEUCF/shpeucfapp,closed,Build Leaderboard scene,Build,"This scene must:
- Display number of points for all users/members.
- Show in descending order (most points on top)
- Display bar with annotations like: user name, #points
- Eventually, we can also allow users to tap on the bar and see which events they earned the points from, like GBM 5 points, attending conference 5 points, etc.
The functionality must be:
- User gets points as they check in to events through the calendar event
- Those points and its corresponding data get stored in the database under users data.
- Leaderboard must keep watching changes/get update when user gets point and render the updated data on the Leaderboard scene",1.0,"Build Leaderboard scene - This scene must:
- Display number of points for all users/members.
- Show in descending order (most points on top)
- Display bar with annotations like: user name, #points
- Eventually, we can also allow users to tap on the bar and see which events they earned the points from, like GBM 5 points, attending conference 5 points, etc.
The functionality must be:
- User gets points as they check in to events through the calendar event
- Those points and its corresponding data get stored in the database under users data.
- Leaderboard must keep watching changes/get update when user gets point and render the updated data on the Leaderboard scene",0,build leaderboard scene this scene must display number of points for all users members show is descending order most points on top display bar with annotations like user name points eventually we can also allow users to tap on the bar and see which events the earned the points from like gbm points attending conference points etc the functionality must be user gets points as they check in to events through the calendar event those points and its corresponding data get stored in the database under users data leaderboard must keep watching changes get update when user gets point and render the updated data on the leaderboard scene,0
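A minimal sketch of the ordering and annotation described in that record, assuming each user object already carries a name, a total points value, and an optional per-event breakdown; the data shape is an assumption, and only the descending sort and the "user name, #points" annotation come from the issue text.
```js
// Hypothetical user shape: { name, points, events: { GBM: 5, conference: 5 } }
function buildLeaderboard(users) {
  return Object.values(users)
    .sort((a, b) => b.points - a.points)              // most points on top
    .map((user, index) => ({
      rank: index + 1,
      label: `${user.name}, ${user.points} points`,   // bar annotation: user name, #points
      events: user.events || {},                      // breakdown shown when a bar is tapped
    }));
}

console.log(buildLeaderboard({
  a: { name: 'Ana', points: 15, events: { GBM: 5, conference: 10 } },
  b: { name: 'Luis', points: 5, events: { GBM: 5 } },
}));
```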
558899,16544227072.0,IssuesEvent,2021-05-27 21:10:40,returntocorp/semgrep,https://api.github.com/repos/returntocorp/semgrep,closed,Add support for typed metavariables in Javascript,enhancement lang:javascript pattern:types priority:low stale,"We support typed metavariables for statically typed languages like Java and Go. Javascript is more difficult, but it would be nice to have!",1.0,"Add support for typed metavariables in Javascript - We support typed metavariables for statically typed languages like Java and Go. Javascript is more difficult, but it would be nice to have!",0,add support for typed metavariables in javascript we support typed metavariables for statically typed languages like java and go javascript is more difficult but it would be nice to have ,0
5054,18403131306.0,IssuesEvent,2021-10-12 18:40:44,CDCgov/prime-field-teams,https://api.github.com/repos/CDCgov/prime-field-teams,opened,Solution Imp. - Plan & Schedule GO Live,sender-automation,"**Main Objectives & Tasks:**
- [ ] Contact RS Pipeline Team to turn on HHSProtect reporting.
- [ ] Determine if there is a backlog of results. If so, contact the SPHD to determine what results are needed. For example, in AL, if a Sender has reported only positives through the ADPH Report Card, then ADPH wants all COVID (positive & negative) since the Sender started testing.
- [ ] Coordinate and schedule a start date for daily results and backlog results between the SPHD and the Sender.
",1.0,"Solution Imp. - Plan & Schedule GO Live - **Main Objectives & Tasks:**
- [ ] Contact RS Pipeline Team to turn on HHSProtect reporting.
- [ ] Determine if there is a backlog of results. If so, contact the SPHD to determine what results are needed. For example, in AL, if a Sender has reported only positives through the ADPH Report Card, then ADPH wants all COVID (positive & negative) since the Sender started testing.
- [ ] Coordinate and schedule a start date for daily results and backlog results between the SPHD and the Sender.
",1,solution imp plan schedule go live main objectives tasks contact rs pipeline team to turn on hhsprotect reporting determine if there is a backlog of results if so contact the sphd to determine what results are needed example in al if a sender has only reported positives only through the adph report card then adph wants all covid positive negative since the sender started testing coordinate and schedule a start date for daily results and backlog results between the sphd and the sender ,1
219826,17114104765.0,IssuesEvent,2021-07-11 00:42:26,backend-br/vagas,https://api.github.com/repos/backend-br/vagas,closed,[REMOTO] Java Backend Engineer Specialist na AgileProcess,AWS CI CLT Docker Especialista Git Java MySQL Remoto Scrum Stale Testes Unitários startup,"## Nossa empresa
A **AgileProcess** é uma startup criada para simplificar o processo logístico, tornando-o muito mais eficiente do início ao fim. Junto com pessoas que buscam tornar a logística cada vez mais digital e otimizada, a empresa utiliza as melhores tecnologias do mercado em busca de entregar sempre mais e melhor.
Fundada em 2014 e com sede em Florianópolis, a **AgileProcess** está em constante crescimento e busca por criatividade, conhecimento, pessoas apaixonadas por inovação e tecnologia e o desejo de tornar a logística 100% digital em uma realidade.
O QUE FAZEMOS?
Utilizando as melhores tecnologias do mercado, o sistema **AgileProces**s otimiza o uso da frota, propõe as melhores rotas e sequenciamento de entregas e coletas, auxilia cada motorista, mostrando o percurso com apoio de GPS e faz a comprovação de entregas no exato momento em que forem realizadas.
Hoje, mais de 9 milhões de entregas e coletas passam no software da **AgileProcess** por mês, presente em mais de 4.600 cidades pelo Brasil.
## Descrição da vaga
Buscamos um(a) **Backend Engineer Specialist** que será responsável, junto ao nossos squads de desenvolvimento, por prover a melhor experiência para nossos clientes através de nossas soluções.
RESPONSABILIDADES E ATRIBUIÇÕES
- Desafiar o status quo e desenvolver soluções inovadoras para problemas complexos;
- Desenvolver e manter nossos Microserviços de forma ágil, aplicando boas práticas de Engenharia de Software;
- Contribuir com o desenvolvimento e arquitetura da plataforma, preparando-a para um crescimento acelerado;
- Construir uma base sólida para o desenvolvimento de novos produtos;
- Desenvolver sistemas escaláveis, sustentáveis e orientados ao usuário;
- Trabalhar em um ambiente que estimula e valoriza a autonomia e a transparência;
- Ajudar o crescimento do time de tecnologia e engenharia.
## Local
100% remoto. Estamos localizados em Florianópolis - Santa Catarina.
## Requisitos
- Experiência e conhecimento profundo com desenvolvimento Java 8;
- Experiência e conhecimento em GitFlow;
- Experiência e conhecimento profundo em Docker;
- Ter atuado na construção de testes unitários e integrados;
- Ter atuado na construção de testes de comportamento (BDD);
- Conhecimentos em Design Patterns, arquitetura e engenharia de software;
- Conhecimentos em GitLab CI;
- Conhecimentos em metodologias ágeis de desenvolvimento (Scrum, Kanban).
Nosso Stack:
- Java;
- MySQL;
- AWS;
- Git (GitLab).
## Benefícios
- Onboarding de boas-vindas!
- “All Hands”: nosso encontro semanal com o CEO;
- Dress Code: seja você mesmo(a);
- Flexibilidade de horário;
- VR/VA Flex: R$ 550,00 (mês);
- Plano de saúde (para você e quem você ama);
- Plano odontológico;
- TotalPass;
- Clube de Descontos - NewValue;
- PLR;
- Parceria com ZenKlub;
e muito mais!
## Contratação
CLT.
Salário: R$ 12.500,00 - R$ 14.000,00
Nível: Especialista
## Como se candidatar
Por favor envie um email para ana.felauto@agileprocess.com.br OU;
Candidate-se pela nossa página de carreiras - https://agileprocess.gupy.io/jobs/864763?jobBoardSource=gupy_public_page OU;
Me chame no whatsapp: +554898835995
## Tempo médio de feedbacks
Costumamos enviar feedbacks em até 03 dias após cada processo.
E-mail para contato em caso de não haver resposta: ana.felauto@agileprocess.com.br
## Labels
#### Alocação
- Remoto
#### Regime
- CLT
#### Nível
- Especialista
",1.0,"[REMOTO] Java Backend Engineer Specialist na AgileProcess - ## Nossa empresa
A **AgileProcess** é uma startup criada para simplificar o processo logístico, tornando-o muito mais eficiente do início ao fim. Junto com pessoas que buscam tornar a logística cada vez mais digital e otimizada, a empresa utiliza as melhores tecnologias do mercado em busca de entregar sempre mais e melhor.
Fundada em 2014 e com sede em Florianópolis, a **AgileProcess** está em constante crescimento e busca por criatividade, conhecimento, pessoas apaixonadas por inovação e tecnologia e o desejo de tornar a logística 100% digital em uma realidade.
O QUE FAZEMOS?
Utilizando as melhores tecnologias do mercado, o sistema **AgileProces**s otimiza o uso da frota, propõe as melhores rotas e sequenciamento de entregas e coletas, auxilia cada motorista, mostrando o percurso com apoio de GPS e faz a comprovação de entregas no exato momento em que forem realizadas.
Hoje, mais de 9 milhões de entregas e coletas passam no software da **AgileProcess** por mês, presente em mais de 4.600 cidades pelo Brasil.
## Descrição da vaga
Buscamos um(a) **Backend Engineer Specialist** que será responsável, junto ao nossos squads de desenvolvimento, por prover a melhor experiência para nossos clientes através de nossas soluções.
RESPONSABILIDADES E ATRIBUIÇÕES
- Desafiar o status quo e desenvolver soluções inovadoras para problemas complexos;
- Desenvolver e manter nossos Microserviços de forma ágil, aplicando boas práticas de Engenharia de Software;
- Contribuir com o desenvolvimento e arquitetura da plataforma, preparando-a para um crescimento acelerado;
- Construir uma base sólida para o desenvolvimento de novos produtos;
- Desenvolver sistemas escaláveis, sustentáveis e orientados ao usuário;
- Trabalhar em um ambiente que estimula e valoriza a autonomia e a transparência;
- Ajudar o crescimento do time de tecnologia e engenharia.
## Local
100% remoto. Estamos localizados em Florianópolis - Santa Catarina.
## Requisitos
- Experiência e conhecimento profundo com desenvolvimento Java 8;
- Experiência e conhecimento em GitFlow;
- Experiência e conhecimento profundo em Docker;
- Ter atuado na construção de testes unitários e integrados;
- Ter atuado na construção de testes de comportamento (BDD);
- Conhecimentos em Design Patterns, arquitetura e engenharia de software;
- Conhecimentos em GitLab CI;
- Conhecimentos em metodologias ágeis de desenvolvimento (Scrum, Kanban).
Nosso Stack:
- Java;
- MySQL;
- AWS;
- Git (GitLab).
## Benefícios
- Onboarding de boas-vindas!
- “All Hands”: nosso encontro semanal com o CEO;
- Dress Code: seja você mesmo(a);
- Flexibilidade de horário;
- VR/VA Flex: R$ 550,00 (mês);
- Plano de saúde (para você e quem você ama);
- Plano odontológico;
- TotalPass;
- Clube de Descontos - NewValue;
- PLR;
- Parceria com ZenKlub;
e muito mais!
## Contratação
CLT.
Salário: R$ 12.500,00 - R$ 14.000,00
Nível: Especialista
## Como se candidatar
Por favor envie um email para ana.felauto@agileprocess.com.br OU;
Candidate-se pela nossa página de carreiras - https://agileprocess.gupy.io/jobs/864763?jobBoardSource=gupy_public_page OU;
Me chame no whatsapp: +554898835995
## Tempo médio de feedbacks
Costumamos enviar feedbacks em até 03 dias após cada processo.
E-mail para contato em caso de não haver resposta: ana.felauto@agileprocess.com.br
## Labels
#### Alocação
- Remoto
#### Regime
- CLT
#### Nível
- Especialista
",0, java backend engineer specialist na agileprocess nossa empresa a agileprocess é uma startup criada para simplificar o processo logístico tornando o muito mais eficiente do início ao fim junto com pessoas que buscam tornar a logística cada vez mais digital e otimizada a empresa utiliza as melhores tecnologias do mercado em busca de entregar sempre mais e melhor fundada em e com sede em florianópolis a agileprocess está em constante crescimento e busca por criatividade conhecimento pessoas apaixonadas por inovação e tecnologia e o desejo de tornar a logística digital em uma realidade o que fazemos utilizando as melhores tecnologias do mercado o sistema agileproces s otimiza o uso da frota propõe as melhores rotas e sequenciamento de entregas e coletas auxilia cada motorista mostrando o percurso com apoio de gps e faz a comprovação de entregas no exato momento em que forem realizadas hoje mais de milhões de entregas e coletas passam no software da agileprocess por mês presente em mais de cidades pelo brasil descrição da vaga buscamos um a backend engineer specialist que será responsável junto ao nossos squads de desenvolvimento por prover a melhor experiência para nossos clientes através de nossas soluções responsabilidades e atribuições desafiar o status quo e desenvolver soluções inovadoras para problemas complexos desenvolver e manter nossos microserviços de forma ágil aplicando boas práticas de engenharia de software contribuir com o desenvolvimento e arquitetura da plataforma preparando a para um crescimento acelerado construir uma base sólida para o desenvolvimento de novos produtos desenvolver sistemas escaláveis sustentáveis e orientados ao usuário trabalhar em um ambiente que estimula e valoriza a autonomia e a transparência ajudar o crescimento do time de tecnologia e engenharia local remoto estamos localizados em florianópolis santa catarina requisitos experiência e conhecimento profundo com desenvolvimento java experiência e conhecimento em gitflow experiência e conhecimento profundo em docker ter atuado na construção de testes unitários e integrados ter atuado na construção de testes de comportamento bdd conhecimentos em design patterns arquitetura e engenharia de software conhecimentos em gitlab ci conhecimentos em metodologias ágeis de desenvolvimento scrum kanban nosso stack java mysql aws git gitlab benefícios onboarding de boas vindas “all hands” nosso encontro semanal com o ceo dress code seja você mesmo a flexibilidade de horário vr va flex r mês plano de saúde para você e quem você ama plano odontológico totalpass clube de descontos newvalue plr parceria com zenklub e muito mais contratação clt salário r r nível especialista como se candidatar por favor envie um email para ana felauto agileprocess com br ou candidate se pela nossa página de carreiras ou me chame no whatsapp tempo médio de feedbacks costumamos enviar feedbacks em até dias após cada processo e mail para contato em caso de não haver resposta ana felauto agileprocess com br labels alocação remoto regime clt nível especialista ,0
5665,20677434731.0,IssuesEvent,2022-03-10 10:37:03,wazuh/wazuh-qa,https://api.github.com/repos/wazuh/wazuh-qa,opened,Automate the execution of all integration tests in Jenkins,team/qa subteam/qa-thunder type/jenkins-automation,"We want to automate the process of launching all the integration tests and obtain results formatted in a table.
So far we have a pipeline with which we can launch a run to test a specific module or component of Wazuh (FIM, remoted, agentd ...), and it is necessary to launch `n` builds with different parameters to get a complete view.
The idea is to create a new pipeline that automatically launches all the necessary builds, and formats the output of each of them to create an html report with a table showing the results obtained.
This pipeline will be useful for testing releases, as well as testing PR changes before they are merged into stable branches.",1.0,"Automate the execution of all integration tests in Jenkins - We want to automate the process of launching all the integration tests and obtain results formatted in a table.
So far we have a pipeline with which we can launch a run to test a specific module or component of Wazuh (FIM, remoted, agentd ...), and it is necessary to launch `n` builds with different parameters to get a complete view.
The idea is to create a new pipeline that automatically launches all the necessary builds, and formats the output of each of them to create an html report with a table showing the results obtained.
This pipeline will be useful for testing releases, as well as testing PR changes before they are merged into stable branches.",1,automate the execution of all integration tests in jenkins we want to automate the process of launching all the integration tests and obtain results formatted in a table so far we have a pipeline with which we can launch a run to test a specific module or component of wazuh fim remoted agentd and it is necessary to launch n builds with different parameters to get a complete view the idea is to create a new pipeline that automatically launches all the necessary builds and formats the output of each of them to create an html report with a table showing the results obtained this pipeline will be useful for testing releases as well as testing pr changes before they are merged into stable branches ,1
3648,14242672802.0,IssuesEvent,2020-11-19 02:24:26,PastVu/pastvu,https://api.github.com/repos/PastVu/pastvu,closed,Automerge with tagging,Automation CI/CD Priority: Major,"The task is to modify the script https://github.com/PastVu/pastvu/blob/master/.github/workflows/en-automerge.yml so that, when the current master commit has a tag v.A.B.C, a tag `vA.B.C-en` is created in the `en` branch.
",1.0,"Automerge with tagging - The task is to modify the script https://github.com/PastVu/pastvu/blob/master/.github/workflows/en-automerge.yml so that, when the current master commit has a tag v.A.B.C, a tag `vA.B.C-en` is created in the `en` branch.
",1,automerge with tagging задача модифицировать скрипт таким образом чтобы при наличии текущем коммите мастера тега v a b c в ветке en создавался тег va b c en img width alt image src ,1
739940,25729571306.0,IssuesEvent,2022-12-07 19:10:28,BIDMCDigitalPsychiatry/LAMP-platform,https://api.github.com/repos/BIDMCDigitalPsychiatry/LAMP-platform,closed,Data portal visualization error,bug 1day frontend priority HIGH,"It appears that for any researcher, visualizations in the data portal produce an error.
Bug originally reported here: https://mindlamp.discourse.group/t/cortex-visualizations-show-react-error-310/726
**To Reproduce**
Example steps to reproduce error:
1. Enter LAMP dashboard
2. Enter data portal
3. Enter GUI mode
4. Select researcher
5. Select ""data quality tags""
Output for error:

",1.0,"Data portal visualization error - It appears that for any researcher, visualizations in the data portal produce an error.
Bug originally reported here: https://mindlamp.discourse.group/t/cortex-visualizations-show-react-error-310/726
**To Reproduce**
Example steps to reproduce error:
1. Enter LAMP dashboard
2. Enter data portal
3. Enter GUI mode
4. Select researcher
5. Select ""data quality tags""
Output for error:

",0,data portal visualization error it appears that for any researcher visualizations in the data portal produce an error bug originally reported here to reproduce example steps to reproduce error enter lamp dashboard enter data portal enter gui mode select researcher select data quality tags output for error ,0
3054,13037836403.0,IssuesEvent,2020-07-28 14:22:56,prisma/language-tools,https://api.github.com/repos/prisma/language-tools,closed,Test formatting / binary execution fails,kind/improvement topic: automation,"This might need a reproducible crash of the binary for testing.
_Originally posted by @janpio in https://github.com/prisma/vscode/issues/84#issuecomment-618607640_
An error is shown in the output and a window notification is given including the error details.",1.0,"Test formatting / binary execution fails - This might need a reproducible crash of the binary for testing.
_Originally posted by @janpio in https://github.com/prisma/vscode/issues/84#issuecomment-618607640_
An error is shown in the output and a window notification is given including the error details.",1,test formatting binary execution fails this might need a reproducible crash of the binary for testing originally posted by janpio in an error is shown in the output and a window notification is given including the error details ,1
429169,30028127833.0,IssuesEvent,2023-06-27 07:48:28,Pecneb/computer_vision_research,https://api.github.com/repos/Pecneb/computer_vision_research,closed,Run detection on all bellevue datasets in hourly resolution,documentation,"- [x] Bellevue Newport
- [x] Bellevue Eastgate
- [x] Bellevue NE
- [x] Bellevue SE",1.0,"Run detection on all bellevue datasets in hourly resolution - - [x] Bellevue Newport
- [x] Bellevue Eastgate
- [x] Bellevue NE
- [x] Bellevue SE",0,run detection on all bellevue datasets in hourly resolution bellevue newport bellevue eastgate bellevue ne bellevue se,0
6992,24099219382.0,IssuesEvent,2022-09-19 22:00:15,yugabyte/yugabyte-db,https://api.github.com/repos/yugabyte/yugabyte-db,closed,[YSQL] Failed while visiting tablets in sys catalog: Cannot add a table to a colocation group for tablet: place is taken by a table,kind/bug duplicate area/ysql priority/medium status/awaiting-triage qa_automation,"Jira Link: [DB-3212](https://yugabyte.atlassian.net/browse/DB-3212)
### Description
Issue occurred during stress testing TABLEGROUPS with 3/3 runs.
Scenario - spawn 2xlarge VMs 3RF cluster, create 5 tablegroups, spawn 5*10*100 (batches) workload that will load data to tables.
At some point FATAL occurs
```
F20220816 07:22:59 ../../src/yb/master/catalog_manager.cc:1001] T 00000000000000000000000000000000 P c158b26d1df94b5595435d1103627cbb: Failed to load sys catalog: Corruption (yb/master/catalog_loaders.cc:265): Failed while visiting tablets in sys catalog: Cannot add a table 000033e6000030008000000000004048 (ColocationId: 1838765072) to a colocation group for tablet 6a398db1629e43ff9a9c81084514fe59: place is taken by a table 000033e6000030008000000000004048
@ 0x7fc14e985c1b google::LogMessage::SendToLog()
@ 0x7fc14e986cd8 google::LogMessage::Flush()
@ 0x7fc14e98715f google::LogMessageFatal::~LogMessageFatal()
@ 0x7fc151fc3e7e yb::master::CatalogManager::LoadSysCatalogDataTask()
@ 0x7fc14ec29e5c yb::ThreadPool::DispatchThread()
@ 0x7fc14ec252fb yb::Thread::SuperviseThread()
@ 0x7fc14cf9a694 start_thread
@ 0x7fc14c6d741d __clone
```
",1.0,"[YSQL] Failed while visiting tablets in sys catalog: Cannot add a table to a colocation group for tablet: place is taken by a table - Jira Link: [DB-3212](https://yugabyte.atlassian.net/browse/DB-3212)
### Description
Issue occured during stress testing TABLEGROUPS with 3/3 runs.
Scenario - spawn 2xlarge VMs 3RF cluster, create 5 tablegroups, spawn 5*10*100 (batches) workload that will load data to tables.
At some point FATAL occurs
```
F20220816 07:22:59 ../../src/yb/master/catalog_manager.cc:1001] T 00000000000000000000000000000000 P c158b26d1df94b5595435d1103627cbb: Failed to load sys catalog: Corruption (yb/master/catalog_loaders.cc:265): Failed while visiting tablets in sys catalog: Cannot add a table 000033e6000030008000000000004048 (ColocationId: 1838765072) to a colocation group for tablet 6a398db1629e43ff9a9c81084514fe59: place is taken by a table 000033e6000030008000000000004048
@ 0x7fc14e985c1b google::LogMessage::SendToLog()
@ 0x7fc14e986cd8 google::LogMessage::Flush()
@ 0x7fc14e98715f google::LogMessageFatal::~LogMessageFatal()
@ 0x7fc151fc3e7e yb::master::CatalogManager::LoadSysCatalogDataTask()
@ 0x7fc14ec29e5c yb::ThreadPool::DispatchThread()
@ 0x7fc14ec252fb yb::Thread::SuperviseThread()
@ 0x7fc14cf9a694 start_thread
@ 0x7fc14c6d741d __clone
```
",1, failed while visiting tablets in sys catalog cannot add a table to a colocation group for tablet place is taken by a table jira link description issue occured during stress testing tablegroups with runs scenario spawn vms cluster create tablegroups spawn batches workload that will load data to tables at some point fatal occurs src yb master catalog manager cc t p failed to load sys catalog corruption yb master catalog loaders cc failed while visiting tablets in sys catalog cannot add a table colocationid to a colocation group for tablet place is taken by a table google logmessage sendtolog google logmessage flush google logmessagefatal logmessagefatal yb master catalogmanager loadsyscatalogdatatask yb threadpool dispatchthread yb thread supervisethread start thread clone ,1
179422,6625028926.0,IssuesEvent,2017-09-22 14:03:26,mercadopago/px-ios,https://api.github.com/repos/mercadopago/px-ios,opened,The correct messages are not shown for exclusions,Priority: Medium,"### Expected Behavior
When I choose a card type (credit, debit, or prepaid), if there is only 1 payment method for that card type it should show the disclaimer that only that payment method is accepted; if there is more than one and I try to complete the flow with a payment method that is not the chosen one or is not supported, tapping the ""Mas Info"" button should list the payment methods supported within the chosen card category
",1.0,"The correct messages are not shown for exclusions - ### Expected Behavior
When I choose a card type (credit, debit, or prepaid), if there is only 1 payment method for that card type it should show the disclaimer that only that payment method is accepted; if there is more than one and I try to complete the flow with a payment method that is not the chosen one or is not supported, tapping the ""Mas Info"" button should list the payment methods supported within the chosen card category
",0,no se muestran los mensajes correctos en exclusiones comportamiento esperado cuando elijo un tipo de tarjeta crédito débito o prepaga si para ese tipo de tarjeta solo hay medio de pago deberia mostrar el disclaimer de que solo se acepta ese medio de pago si hay mas de uno e intento completar con un medio de pago que no es el elegido o que no es soportado al tocar el botón de mas info deberia listar los medios de pagos soportados dentro de la categoria de tarjeta elegida ,0
328160,9990349744.0,IssuesEvent,2019-07-11 08:38:25,webcompat/web-bugs,https://api.github.com/repos/webcompat/web-bugs,closed,smallbusiness.chron.com - see bug description,browser-firefox-mobile engine-gecko priority-important,"
**URL**: https://smallbusiness.chron.com/rename-dual-boot-windows-start-up-63870.html
**Browser / Version**: Firefox Mobile 67.0
**Operating System**: Android
**Tested Another Browser**: No
**Problem type**: Something else
**Description**: Video autoplays with autoplay blocked
**Steps to Reproduce**:
Immediately upon loading the page to read an article, an unrelated video started autoplaying without any prompt. I don't want this video wasting my battery, my bandwidth, or my time.
[](https://webcompat.com/uploads/2019/7/787ef557-678e-4acc-8a17-7de999cbcb7e.jpeg)
Browser Configuration
mixed active content blocked: false
image.mem.shared: true
buildID: 20190622041859
tracking content blocked: false
gfx.webrender.blob-images: true
hasTouchScreen: true
mixed passive content blocked: false
gfx.webrender.enabled: false
gfx.webrender.all: false
channel: default
Console Messages:
[u'[JavaScript Warning: ""The resource at https://pagead2.googlesyndication.com/pagead/js/adsbygoogle.js was blocked because content blocking is enabled."" {file: ""https://smallbusiness.chron.com/rename-dual-boot-windows-start-up-63870.html"" line: 0}]', u'[JavaScript Warning: ""The resource at https://cdn.taboola.com/libtrc/hearstlocalnews-chronmobile/loader.js was blocked because content blocking is enabled."" {file: ""https://smallbusiness.chron.com/rename-dual-boot-windows-start-up-63870.html"" line: 0}]', u'[JavaScript Warning: ""The resource at https://pagead2.googlesyndication.com/pagead/js/adsbygoogle.js was blocked because content blocking is enabled."" {file: ""https://smallbusiness.chron.com/rename-dual-boot-windows-start-up-63870.html"" line: 0}]', u'[JavaScript Warning: ""The resource at https://pagead2.googlesyndication.com/pagead/js/adsbygoogle.js was blocked because content blocking is enabled."" {file: ""https://smallbusiness.chron.com/rename-dual-boot-windows-start-up-63870.html"" line: 0}]', u'[JavaScript Warning: ""The resource at https://pagead2.googlesyndication.com/pagead/js/adsbygoogle.js was blocked because content blocking is enabled."" {file: ""https://smallbusiness.chron.com/rename-dual-boot-windows-start-up-63870.html"" line: 0}]', u'[JavaScript Warning: ""The resource at https://nexus.ensighten.com/hearst/news-3p/Bootstrap.js was blocked because content blocking is enabled."" {file: ""https://smallbusiness.chron.com/rename-dual-boot-windows-start-up-63870.html"" line: 0}]', u'[JavaScript Warning: ""Loading failed for the
`);
}
else
res.end();
})
.listen(4100);
```
#### Provide the test code and the tested page URL (if applicable)
Test code
```js
import { Role, ClientFunction, Selector } from 'testcafe';
fixture `Test authentication`
.page `http://localhost:4100/`;
const role = Role(`http://localhost:4100/#login`, async t => await t.click('input'), { preserveUrl: true });
test('first login', async t => {
await t
.wait(3000)
.useRole(role)
.expect(Selector('h1').innerText).eql('Authorized');
});
test('second login', async t => {
await t
.wait(3000)
.useRole(role)
.expect(Selector('h1').innerText).eql('Authorized');
});
```
### Workaround
```js
import { Role, ClientFunction, Selector } from 'testcafe';
fixture `Test authentication`
.page `http://localhost:4100/`;
const role = Role(`http://localhost:4100/#login`, async t => await t.click('input'), { preserveUrl: true });
const reloadPage = new ClientFunction(() => location.reload(true));
const fixedUseRole = async (t, role) => {
await t.useRole(role);
await reloadPage();
};
test('first login', async t => {
await t.wait(3000)
await fixedUseRole(t, role);
await t.expect(Selector('h1').innerText).eql('Authorized');
});
test('second login', async t => {
await t.wait(3000)
await fixedUseRole(t, role);
await t.expect(Selector('h1').innerText).eql('Authorized');
});
```
### Specify your
* testcafe version: 0.19.0",1.0,"Role doesn't work when page navigation doesn't trigger page reloading - ### Are you requesting a feature or reporting a bug?
bug
### What is the current behavior?
Role doesn't work after first-time initialization. `Cookie`, `localStorage` and `sessionStorage` should be restored when a preserved page is loaded but the page changes only the hash and it isn't reloaded after navigating.
### What is the expected behavior?
Page must be reloaded after the `useRole` function call.
### How would you reproduce the current behavior (if this is a bug)?
Node server:
```js
const http = require('http');
http
.createServer((req, res) => {
if (req.url === '/') {
res.writeHead(200, { 'content-type': 'text/html' });
res.end(`
log in
`);
}
else
res.end();
})
.listen(4100);
```
#### Provide the test code and the tested page URL (if applicable)
Test code
```js
import { Role, ClientFunction, Selector } from 'testcafe';
fixture `Test authentication`
.page `http://localhost:4100/`;
const role = Role(`http://localhost:4100/#login`, async t => await t.click('input'), { preserveUrl: true });
test('first login', async t => {
await t
.wait(3000)
.useRole(role)
.expect(Selector('h1').innerText).eql('Authorized');
});
test('second login', async t => {
await t
.wait(3000)
.useRole(role)
.expect(Selector('h1').innerText).eql('Authorized');
});
```
### Workaround
```js
import { Role, ClientFunction, Selector } from 'testcafe';
fixture `Test authentication`
.page `http://localhost:4100/`;
const role = Role(`http://localhost:4100/#login`, async t => await t.click('input'), { preserveUrl: true });
const reloadPage = new ClientFunction(() => location.reload(true));
const fixedUseRole = async (t, role) => {
await t.useRole(role);
await reloadPage();
};
test('first login', async t => {
await t.wait(3000)
await fixedUseRole(t, role);
await t.expect(Selector('h1').innerText).eql('Authorized');
});
test('second login', async t => {
await t.wait(3000)
await fixedUseRole(t, role);
await t.expect(Selector('h1').innerText).eql('Authorized');
});
```
### Specify your
* testcafe version: 0.19.0",1,role doesn t work when page navigation doesn t trigger page reloading are you requesting a feature or reporting a bug bug what is the current behavior role doesn t work after first time initialization cookie localstorage and sessionstorage should be restored when a preserved page is loaded but the page changes only the hash and it isn t reloaded after navigating what is the expected behavior page must be reloaded after the userole function call how would you reproduce the current behavior if this is a bug node server js const http require http http createserver req res if req url res writehead content type text html res end log in var onhashchange function var newhash location hash if newhash if localstorage getitem isloggedin header textcontent authorized header style display block anchor style display none button style display none else header textcontent unauthorized anchor style display block button style display none else if newhash login if localstorage getitem isloggedin return location hash header style display none anchor style display none button style display block button addeventlistener click function localstorage setitem isloggedin true location hash onhashchange window addeventlistener hashchange onhashchange else res end listen provide the test code and the tested page url if applicable test code js import role clientfunction selector from testcafe fixture test authentication page const role role async t await t click input preserveurl true test first login async t await t wait userole role expect selector innertext eql authorized test second login async t await t wait userole role expect selector innertext eql authorized workaround js import role clientfunction selector from testcafe fixture test authentication page const role role async t await t click input preserveurl true const reloadpage new clientfunction location reload true const fixeduserole async t role await t userole role await reloadpage test first login async t await t wait await fixeduserole t role await t expect selector innertext eql authorized test second login async t await t wait await fixeduserole t role await t expect selector innertext eql authorized specify your testcafe version ,1
169943,20841989809.0,IssuesEvent,2022-03-21 02:02:07,michaeldotson/mini-capstone-vue-app,https://api.github.com/repos/michaeldotson/mini-capstone-vue-app,opened,CVE-2022-24772 (High) detected in node-forge-0.7.5.tgz,security vulnerability,"## CVE-2022-24772 - High Severity Vulnerability
Vulnerable Library - node-forge-0.7.5.tgz
JavaScript implementations of network transports, cryptography, ciphers, PKI, message digests, and various utilities.
Forge (also called `node-forge`) is a native implementation of Transport Layer Security in JavaScript. Prior to version 1.3.0, RSA PKCS#1 v1.5 signature verification code does not check for tailing garbage bytes after decoding a `DigestInfo` ASN.1 structure. This can allow padding bytes to be removed and garbage data added to forge a signature when a low public exponent is being used. The issue has been addressed in `node-forge` version 1.3.0. There are currently no known workarounds.
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2022-24772 (High) detected in node-forge-0.7.5.tgz - ## CVE-2022-24772 - High Severity Vulnerability
Vulnerable Library - node-forge-0.7.5.tgz
JavaScript implementations of network transports, cryptography, ciphers, PKI, message digests, and various utilities.
Forge (also called `node-forge`) is a native implementation of Transport Layer Security in JavaScript. Prior to version 1.3.0, RSA PKCS#1 v1.5 signature verification code does not check for tailing garbage bytes after decoding a `DigestInfo` ASN.1 structure. This can allow padding bytes to be removed and garbage data added to forge a signature when a low public exponent is being used. The issue has been addressed in `node-forge` version 1.3.0. There are currently no known workarounds.
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve high detected in node forge tgz cve high severity vulnerability vulnerable library node forge tgz javascript implementations of network transports cryptography ciphers pki message digests and various utilities library home page a href path to dependency file mini capstone vue app package json path to vulnerable library node modules node forge package json dependency hierarchy cli service tgz root library webpack dev server tgz selfsigned tgz x node forge tgz vulnerable library vulnerability details forge also called node forge is a native implementation of transport layer security in javascript prior to version rsa pkcs signature verification code does not check for tailing garbage bytes after decoding a digestinfo asn structure this can allow padding bytes to be removed and garbage data added to forge a signature when a low public exponent is being used the issue has been addressed in node forge version there are currently no known workarounds publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution node forge step up your open source security game with whitesource ,0
1622,10469215037.0,IssuesEvent,2019-09-22 19:08:08,a-t-0/Productivity-phone,https://api.github.com/repos/a-t-0/Productivity-phone,opened,Sync multiple calenders with davdroid at once,Automation,"Currently, setting up the syncing with the (google) calendars is performed poorly automated by emulating human touch programatically on the phone from a pc.
Davdroid has (random) requests for donations, which disrupt the control/click flow and lead to a desync between the commands given from the PC and the input required on the phone.
To solve this, modify davdroid so that it asks for the entire groups containing lists of calendar URLs at once (copy-pastable with a single press, or enterable via an API), and asks for the username and password only once per group (or reads the username from the input).",1.0,"Sync multiple calenders with davdroid at once - Currently, setting up the syncing with the (Google) calendars is poorly automated, by programmatically emulating human touch on the phone from a PC.
Davdroid has (random) requests for donations, which disrupt the control/click flow and lead to a desync between the commands given from the PC and the input required on the phone.
To solve, modify the davdroid so that it just asks the entire groups containing lists of calendar urls at once (copy pastable with single press, or enterable via api), and asking the username and password only once per group (or reading the username from the input).",1,sync multiple calenders with davdroid at once currently setting up the syncing with the google calendars is performed poorly automated by emulating human touch programatically on the phone from a pc davdroid has random request for donating which disables the control click flow which leads to a async between commands given from pc and input required on phone to solve modify the davdroid so that it just asks the entire groups containing lists of calendar urls at once copy pastable with single press or enterable via api and asking the username and password only once per group or reading the username from the input ,1
8151,26282565131.0,IssuesEvent,2023-01-07 13:18:52,ita-social-projects/TeachUA,https://api.github.com/repos/ita-social-projects/TeachUA,closed,[Advanced search] Different spelling of center title 'Школа мистецтв імені Миколи Дмитровича Леонтовича',bug Backend Priority: Low Automation,"**Environment:** Windows 11, Google Chrome Version 107.0.5304.107 (Official Build) (64-bit).
**Reproducible:** always.
**Build found:** last commit [7652f37](https://github.com/ita-social-projects/TeachUA/commit/7652f37a2d6de58fe02b06fb38c91acef4b623c7)
**Preconditions**
1. Go to the webpage: https://speak-ukrainian.org.ua/dev/
2. Go to 'Гуртки' tab.
3. Click on 'Розширений пошук' button.
**Steps to reproduce**
1. Click on 'Центр' radio button.
2. Make sure that 'Київ' city is selected (if not, select it).
3. Set 'Район міста' as 'Деснянський'.
4. Click on the center with title 'Школа мистецтв імені Миколи Дмитровича Леонтовича'.
5. Pay attention to the spelling of that title.
6. Go to a database.
7. Execute the following query:
SELECT DISTINCT c.name
FROM centers as c
INNER JOIN locations as l ON c.id=l.center_id
INNER JOIN cities as ct ON l.city_id=ct.id
INNER JOIN districts as ds ON l.district_id=ds.id
WHERE ct.name = 'Київ'
AND ds.name = 'Деснянський';
8. Double-click on the center title 'Школа мистецтв імені Миколи Дмитровича Леонтовича'.
**Actual result**
There are two spaces between the words 'Дмитровича' and 'Леонтовича' in the DB.
UI:

DB:

**Expected result**
The center with title 'Школа мистецтв імені Миколи Дмитровича Леонтовича' should be spelled the same in the UI and the DB (with one space between the words 'Дмитровича' and 'Леонтовича').
**User story and test case links**
User story #274
[Test case](https://jira.softserve.academy/browse/TUA-455)
**Labels to be added**
""Bug"", Priority (""pri: "").
",1.0,"[Advanced search] Different spelling of center title 'Школа мистецтв імені Миколи Дмитровича Леонтовича' - **Environment:** Windows 11, Google Chrome Version 107.0.5304.107 (Official Build) (64-bit).
**Reproducible:** always.
**Build found:** last commit [7652f37](https://github.com/ita-social-projects/TeachUA/commit/7652f37a2d6de58fe02b06fb38c91acef4b623c7)
**Preconditions**
1. Go to the webpage: https://speak-ukrainian.org.ua/dev/
2. Go to 'Гуртки' tab.
3. Click on 'Розширений пошук' button.
**Steps to reproduce**
1. Click on 'Центр' radio button.
2. Make sure that 'Київ' city is selected (if not, select it).
3. Set 'Район міста' as 'Деснянський'.
4. Click on the center with title 'Школа мистецтв імені Миколи Дмитровича Леонтовича'.
5. Pay attention to the spelling of that title.
6. Go to a database.
7. Execute the following query:
SELECT DISTINCT c.name
FROM centers as c
INNER JOIN locations as l ON c.id=l.center_id
INNER JOIN cities as ct ON l.city_id=ct.id
INNER JOIN districts as ds ON l.district_id=ds.id
WHERE ct.name = 'Київ'
AND ds.name = 'Деснянський';
8. Double-click on the center title 'Школа мистецтв імені Миколи Дмитровича Леонтовича'.
**Actual result**
There are two spaces between the words 'Дмитровича' and 'Леонтовича' in the DB.
UI:

DB:

**Expected result**
The center with title 'Школа мистецтв імені Миколи Дмитровича Леонтовича' should be spelled the same in the UI and the DB (with one space between the words 'Дмитровича' and 'Леонтовича').
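One way to avoid this class of mismatch is to normalize whitespace before center titles are stored or compared. The following is an illustrative sketch only, not the project's actual backend code:
```python
import re

def normalize_title(title: str) -> str:
    """Collapse runs of whitespace into single spaces and trim the ends."""
    return re.sub(r"\s+", " ", title).strip()

# With normalization, the UI value and the double-spaced DB value compare equal.
ui_value = "Школа мистецтв імені Миколи Дмитровича Леонтовича"
db_value = "Школа мистецтв імені Миколи Дмитровича  Леонтовича"  # two spaces, as reported
assert normalize_title(ui_value) == normalize_title(db_value)
```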
**User story and test case links**
User story #274
[Test case](https://jira.softserve.academy/browse/TUA-455)
**Labels to be added**
""Bug"", Priority (""pri: "").
",1, different spelling of center title школа мистецтв імені миколи дмитровича леонтовича environment windows google chrome version official build bit reproducible always build found last commit preconditions go to the webpage go to гуртки tab click on розширений пошук button steps to reproduce click on центр radio button make sure that київ city is selected if not select it set район міста as деснянський click on the center with title школа мистецтв імені миколи дмитровича леонтовича pay attention to the spelling of that title go to a database execute the following query select distinct c name from centers as c inner join locations as l on c id l center id inner join cities as ct on l city id ct id inner join districts as ds on l district id ds id where ct name київ and ds name деснянський double click on the center title школа мистецтв імені миколи дмитровича леонтовича actual result there are two spaces between the words дмитровича and леонтовича on db ui db expected result center with title школа мистецтв імені миколи дмитровича леонтовича should have spelled the same on ui and db with one space between the words дмитровича and леонтовича user story and test case links user story labels to be added bug priority pri ,1
1926,11103549389.0,IssuesEvent,2019-12-17 04:21:03,bandprotocol/d3n,https://api.github.com/repos/bandprotocol/d3n,closed,Run EVM bridge automated test for every PR / commit,automation bridge chore,Setup CI to run tests for every push that affects `bridge/evm` directory.,1.0,Run EVM bridge automated test for every PR / commit - Setup CI to run tests for every push that affects `bridge/evm` directory.,1,run evm bridge automated test for every pr commit setup ci to run tests for every push that affects bridge evm directory ,1
214628,16568902759.0,IssuesEvent,2021-05-30 01:43:21,SHOPFIFTEEN/FIFTEEN_FRONT,https://api.github.com/repos/SHOPFIFTEEN/FIFTEEN_FRONT,opened,1-34. Issue regarding Product Management - Registration,bug documentation,"Actions needed to reproduce the error (i.e., how the error was discovered)
On the admin screen, while registering a product, text was entered into fields that must be numeric, such as the shipping fee and the discount rate
Expected behavior or result
A warning dialog telling the user how that field should be corrected
Actual behavior or result
Registration simply fails with no warning dialog shown (it is impossible to tell which field is in error)
If possible, a suggestion for fixing the error.
The form needs to be fixed so that a warning dialog shows which field is in error",1.0,"1-34. Issue regarding Product Management - Registration - Actions needed to reproduce the error (i.e., how the error was discovered)
On the admin screen, while registering a product, text was entered into fields that must be numeric, such as the shipping fee and the discount rate
Expected behavior or result
A warning dialog telling the user how that field should be corrected
Actual behavior or result
Registration simply fails with no warning dialog shown (it is impossible to tell which field is in error)
If possible, a suggestion for fixing the error.
어느 부분이 오류인지 경고창으로 표시할 수 있도록 수정 필요",0, 상품관리 등록 관한 issue 오류를 재연하기 위해 필요한 조치 즉 어떻게 하여 오류를 발견하였나 관리자 화면에서 상품 등록 시 배송비 할인율 등 숫자로 입력해야 하는 부분을 글자로 등록 시도 예상했던 동작이나 결과 해당 부분을 어떤 방식으로 수정해야 한다는 경고창 표시 실제 나타난 동작이나 결과 경고창 표시 없이 등록만 불가 어느 부분이 오류인지 확인 불가 가능한 경우 오류 수정을 위한 제안 어느 부분이 오류인지 경고창으로 표시할 수 있도록 수정 필요,0
7469,24946537777.0,IssuesEvent,2022-11-01 01:04:36,dannytsang/homeassistant-config,https://api.github.com/repos/dannytsang/homeassistant-config,opened, Change automations to ⌛timers,automations integration: smartthings,"Similar to #63 but for all other automations that use the ""for"" parameter in automation triggers.
Checklist:
- [ ] Bedroom heated blankets
- [ ] Fans
- [ ] Switches",1.0," Change automations to ⌛timers - Similar to #63 but for all other automations that use the ""for"" parameter in automation triggers.
Checklist:
- [ ] Bedroom heated blankets
- [ ] Fans
- [ ] Switches",1, change automations to ⌛timers similar to but for all other automations that use the for parameter in automation triggers checklist bedroom heated blankets fans switches,1
404,6229997195.0,IssuesEvent,2017-07-11 06:35:23,VP-Technologies/assistant-server,https://api.github.com/repos/VP-Technologies/assistant-server,opened,Implement Creation of Devices DB,automation,"Follow the spec from the doc, and create a test script with fake devices.",1.0,"Implement Creation of Devices DB - Follow the spec from the doc, and create a test script with fake devices.",1,implement creation of devices db follow the spec from the doc and create a test script with fake devices ,1
5978,21781161857.0,IssuesEvent,2022-05-13 19:05:29,dotnet/arcade,https://api.github.com/repos/dotnet/arcade,closed,CG work for dotnet-helix-machines,First Responder Detected By - Automation Helix-Machines Operations,"To drive our CG alert to Zero, please address the following items.
https://dnceng.visualstudio.com/internal/_componentGovernance/dotnet-helix-machines?_a=alerts&typeId=6377838&alerts-view-option=active
We need to address anything with a medium (or higher) priority",1.0,"CG work for dotnet-helix-machines - To drive our CG alert to Zero, please address the following items.
https://dnceng.visualstudio.com/internal/_componentGovernance/dotnet-helix-machines?_a=alerts&typeId=6377838&alerts-view-option=active
We need to address anything with a medium (or higher) priority",1,cg work for dotnet helix machines to drive our cg alert to zero please address the following items we need to address anything with a medium or higher priority,1
4588,16961498009.0,IssuesEvent,2021-06-29 04:59:42,ecotiya/wicum,https://api.github.com/repos/ecotiya/wicum,opened,Introduce automated builds and automated tests,automation,"# [Functional requirements]
・Introduce automated builds and automated tests.
・For now, do this after the AWS deployment is finished, if there is time to spare.
# [Tasks]
- [ ] task1
- [ ] task2
- [ ] task3
# [Items to investigate]
",1.0,"Introduce automated builds and automated tests - # [Functional requirements]
・Introduce automated builds and automated tests.
・For now, do this after the AWS deployment is finished, if there is time to spare.
# [Tasks]
- [ ] task1
- [ ] task2
- [ ] task3
# [Items to investigate]
",1,自動ビルド及び自動テストの導入 【機能要件】 ・自動ビルド及び自動テストの導入。 ・とりあえずawsのデプロイまで完了したあとに余裕があったらやる。 【タスク】 【調査事項】 ,1
2392,11862563563.0,IssuesEvent,2020-03-25 18:09:16,elastic/metricbeat-tests-poc,https://api.github.com/repos/elastic/metricbeat-tests-poc,closed,Validate Helm charts,automation,"Let's use a BDD approach to validate the official Helm charts for elastic. Something like this:
```gherkin
@helm
@k8s
@metricbeat
Feature: The Helm chart is following product recommended configuration for Kubernetes
Scenario: The Metricbeat chart will create recommended K8S resources
Given a cluster is running
When the ""metricbeat"" Elastic's helm chart is installed
Then a pod will be deployed on each node of the cluster by a DaemonSet
And a ""Deployment"" will manage additional pods for metricsets querying internal services
And a ""kube-state-metrics"" chart will retrieve specific Kubernetes metrics
And a ""ConfigMap"" resource contains the ""metricbeat.yml"" content
And a ""ConfigMap"" resource contains the ""kube-state-metrics-metricbeat.yml"" content
And a ""ServiceAccount"" resource manages RBAC
And a ""ClusterRole"" resource manages RBAC
And a ""ClusterRoleBinding"" resource manages RBAC
```",1.0,"Validate Helm charts - Let's use a BDD approach to validate the official Helm charts for elastic. Something like this:
```gherkin
@helm
@k8s
@metricbeat
Feature: The Helm chart is following product recommended configuration for Kubernetes
Scenario: The Metricbeat chart will create recommended K8S resources
Given a cluster is running
When the ""metricbeat"" Elastic's helm chart is installed
Then a pod will be deployed on each node of the cluster by a DaemonSet
And a ""Deployment"" will manage additional pods for metricsets querying internal services
And a ""kube-state-metrics"" chart will retrieve specific Kubernetes metrics
And a ""ConfigMap"" resource contains the ""metricbeat.yml"" content
And a ""ConfigMap"" resource contains the ""kube-state-metrics-metricbeat.yml"" content
And a ""ServiceAccount"" resource manages RBAC
And a ""ClusterRole"" resource manages RBAC
And a ""ClusterRoleBinding"" resource manages RBAC
```",1,validate helm charts let s use a bdd approach to validate the official helm charts for elastic something like this gherkin helm metricbeat feature the helm chart is following product recommended configuration for kubernetes scenario the metricbeat chart will create recommended resources given a cluster is running when the metricbeat elastic s helm chart is installed then a pod will be deployed on each node of the cluster by a daemonset and a deployment will manage additional pods for metricsets querying internal services and a kube state metrics chart will retrieve specific kubernetes metrics and a configmap resource contains the metricbeat yml content and a configmap resource contains the kube state metrics metricbeat yml content and a serviceaccount resource manages rbac and a clusterrole resource manages rbac and a clusterrolebinding resource manages rbac ,1
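Scenarios like the one above are normally wired to step definitions in the project's own test harness. Purely as an illustration of one step, and not the project's actual code, a comparable DaemonSet check could look like this with the official kubernetes Python client (the namespace and naming convention are assumptions):
```python
from kubernetes import client, config

def metricbeat_daemonset_exists(namespace: str = "default") -> bool:
    """Return True if a DaemonSet whose name contains 'metricbeat' exists."""
    config.load_kube_config()  # assumes a local kubeconfig for the test cluster
    apps = client.AppsV1Api()
    daemon_sets = apps.list_namespaced_daemon_set(namespace)
    return any("metricbeat" in ds.metadata.name for ds in daemon_sets.items)
```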
62496,6798459654.0,IssuesEvent,2017-11-02 05:45:28,minishift/minishift,https://api.github.com/repos/minishift/minishift,closed,make integration failed for cmd-openshift feature,component/integration-test kind/bug priority/major status/needs-info,"```
$ make integration GODOG_OPTS=""-tags cmd-openshift -format pretty""
go install -pkgdir=/home/amit/go/src/github.com/minishift/minishift/out/bindata -ldflags=""-X github.com/minishift/minishift/pkg/version.minishiftVersion=1.5.0 -X github.com/minishift/minishift/pkg/version.b2dIsoVersion=v1.1.0 -X github.com/minishift/minishift/pkg/version.centOsIsoVersion=v1.1.0 -X github.com/minishift/minishift/pkg/version.openshiftVersion=v3.6.0 -X github.com/minishift/minishift/pkg/version.commitSha=0e75c4ec"" ./cmd/minishift
mkdir -p /home/amit/go/src/github.com/minishift/minishift/out/integration-test
go test -timeout 3600s github.com/minishift/minishift/test/integration --tags=integration -v -args --test-dir /home/amit/go/src/github.com/minishift/minishift/out/integration-test --binary /home/amit/go/bin/minishift -tags cmd-openshift -format pretty
Test run using Boot2Docker iso image.
Keeping Minishift cache directory '/home/amit/go/src/github.com/minishift/minishift/out/integration-test/cache' for test run.
Log successfully started, logging into: /home/amit/go/src/github.com/minishift/minishift/out/integration-test/integration.log
Running Integration test in: /home/amit/go/src/github.com/minishift/minishift/out/integration-test
Using binary: /home/amit/go/bin/minishift
Feature: Basic
As a user I can perform basic operations of Minishift and OpenShift
Feature: Openshift commands
Commands ""minishift openshift [sub-command]"" are used for interaction with Openshift
cluster in VM provided by Minishift.
.
.
.
Scenario: Getting existing service without route # features/cmd-openshift.feature:67
When executing ""minishift openshift service nodejs-ex"" succeeds # integration_test.go:652 -> github.com/minishift/minishift/test/integration.executingMinishiftCommandSucceedsOrFails
Then stdout should contain ""nodejs-ex"" # integration_test.go:594 -> github.com/minishift/minishift/test/integration.commandReturnShouldContain
Output did not match. Expected: 'nodejs-ex', Actual: '|-----------|------|----------|-----------|--------|
| NAMESPACE | NAME | NODEPORT | ROUTE-URL | WEIGHT |
|-----------|------|----------|-----------|--------|
|-----------|------|----------|-----------|--------|
'
And stdout should not match
""""""
^http:\/\/nodejs-ex-myproject\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.nip\.io
""""""
Scenario: Getting existing service with route # features/cmd-openshift.feature:100
When executing ""minishift openshift service nodejs-ex"" succeeds # integration_test.go:652 -> github.com/minishift/minishift/test/integration.executingMinishiftCommandSucceedsOrFails
Then stdout should contain ""nodejs-ex"" # integration_test.go:594 -> github.com/minishift/minishift/test/integration.commandReturnShouldContain
Output did not match. Expected: 'nodejs-ex', Actual: '|-----------|------|----------|-----------|--------|
| NAMESPACE | NAME | NODEPORT | ROUTE-URL | WEIGHT |
|-----------|------|----------|-----------|--------|
|-----------|------|----------|-----------|--------|
'
And stdout should match
""""""
http:\/\/nodejs-ex-myproject\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.nip\.io
""""""
.
.
.
--- Failed scenarios:
features/cmd-openshift.feature:39
features/cmd-openshift.feature:69
features/cmd-openshift.feature:93
features/cmd-openshift.feature:102
features/cmd-openshift.feature:110
18 scenarios (13 passed, 5 failed)
48 steps (38 passed, 5 failed, 5 skipped)
2m22.19907923s
testing: warning: no tests to run
PASS
exit status 1
FAIL github.com/minishift/minishift/test/integration 142.214s
make: *** [Makefile:176: integration] Error 1
18:17 $
```",1.0,"make integration failed for cmd-openshift feature - ```
$ make integration GODOG_OPTS=""-tags cmd-openshift -format pretty""
go install -pkgdir=/home/amit/go/src/github.com/minishift/minishift/out/bindata -ldflags=""-X github.com/minishift/minishift/pkg/version.minishiftVersion=1.5.0 -X github.com/minishift/minishift/pkg/version.b2dIsoVersion=v1.1.0 -X github.com/minishift/minishift/pkg/version.centOsIsoVersion=v1.1.0 -X github.com/minishift/minishift/pkg/version.openshiftVersion=v3.6.0 -X github.com/minishift/minishift/pkg/version.commitSha=0e75c4ec"" ./cmd/minishift
mkdir -p /home/amit/go/src/github.com/minishift/minishift/out/integration-test
go test -timeout 3600s github.com/minishift/minishift/test/integration --tags=integration -v -args --test-dir /home/amit/go/src/github.com/minishift/minishift/out/integration-test --binary /home/amit/go/bin/minishift -tags cmd-openshift -format pretty
Test run using Boot2Docker iso image.
Keeping Minishift cache directory '/home/amit/go/src/github.com/minishift/minishift/out/integration-test/cache' for test run.
Log successfully started, logging into: /home/amit/go/src/github.com/minishift/minishift/out/integration-test/integration.log
Running Integration test in: /home/amit/go/src/github.com/minishift/minishift/out/integration-test
Using binary: /home/amit/go/bin/minishift
Feature: Basic
As a user I can perform basic operations of Minishift and OpenShift
Feature: Openshift commands
Commands ""minishift openshift [sub-command]"" are used for interaction with Openshift
cluster in VM provided by Minishift.
.
.
.
Scenario: Getting existing service without route # features/cmd-openshift.feature:67
When executing ""minishift openshift service nodejs-ex"" succeeds # integration_test.go:652 -> github.com/minishift/minishift/test/integration.executingMinishiftCommandSucceedsOrFails
Then stdout should contain ""nodejs-ex"" # integration_test.go:594 -> github.com/minishift/minishift/test/integration.commandReturnShouldContain
Output did not match. Expected: 'nodejs-ex', Actual: '|-----------|------|----------|-----------|--------|
| NAMESPACE | NAME | NODEPORT | ROUTE-URL | WEIGHT |
|-----------|------|----------|-----------|--------|
|-----------|------|----------|-----------|--------|
'
And stdout should not match
""""""
^http:\/\/nodejs-ex-myproject\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.nip\.io
""""""
Scenario: Getting existing service with route # features/cmd-openshift.feature:100
When executing ""minishift openshift service nodejs-ex"" succeeds # integration_test.go:652 -> github.com/minishift/minishift/test/integration.executingMinishiftCommandSucceedsOrFails
Then stdout should contain ""nodejs-ex"" # integration_test.go:594 -> github.com/minishift/minishift/test/integration.commandReturnShouldContain
Output did not match. Expected: 'nodejs-ex', Actual: '|-----------|------|----------|-----------|--------|
| NAMESPACE | NAME | NODEPORT | ROUTE-URL | WEIGHT |
|-----------|------|----------|-----------|--------|
|-----------|------|----------|-----------|--------|
'
And stdout should match
""""""
http:\/\/nodejs-ex-myproject\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.nip\.io
""""""
.
.
.
--- Failed scenarios:
features/cmd-openshift.feature:39
features/cmd-openshift.feature:69
features/cmd-openshift.feature:93
features/cmd-openshift.feature:102
features/cmd-openshift.feature:110
18 scenarios (13 passed, 5 failed)
48 steps (38 passed, 5 failed, 5 skipped)
2m22.19907923s
testing: warning: no tests to run
PASS
exit status 1
FAIL github.com/minishift/minishift/test/integration 142.214s
make: *** [Makefile:176: integration] Error 1
18:17 $
```",0,make integration failed for cmd openshift feature make integration godog opts tags cmd openshift format pretty go install pkgdir home amit go src github com minishift minishift out bindata ldflags x github com minishift minishift pkg version minishiftversion x github com minishift minishift pkg version x github com minishift minishift pkg version centosisoversion x github com minishift minishift pkg version openshiftversion x github com minishift minishift pkg version commitsha cmd minishift mkdir p home amit go src github com minishift minishift out integration test go test timeout github com minishift minishift test integration tags integration v args test dir home amit go src github com minishift minishift out integration test binary home amit go bin minishift tags cmd openshift format pretty test run using iso image keeping minishift cache directory home amit go src github com minishift minishift out integration test cache for test run log successfully started logging into home amit go src github com minishift minishift out integration test integration log running integration test in home amit go src github com minishift minishift out integration test using binary home amit go bin minishift feature basic as a user i can perform basic operations of minishift and openshift feature openshift commands commands minishift openshift are used for interaction with openshift cluster in vm provided by minishift scenario getting existing service without route features cmd openshift feature when executing minishift openshift service nodejs ex succeeds integration test go github com minishift minishift test integration executingminishiftcommandsucceedsorfails then stdout should contain nodejs ex integration test go github com minishift minishift test integration commandreturnshouldcontain output did not match expected nodejs ex actual namespace name nodeport route url weight and stdout should not match http nodejs ex myproject nip io scenario getting existing service with route features cmd openshift feature when executing minishift openshift service nodejs ex succeeds integration test go github com minishift minishift test integration executingminishiftcommandsucceedsorfails then stdout should contain nodejs ex integration test go github com minishift minishift test integration commandreturnshouldcontain output did not match expected nodejs ex actual namespace name nodeport route url weight and stdout should match http nodejs ex myproject nip io failed scenarios features cmd openshift feature features cmd openshift feature features cmd openshift feature features cmd openshift feature features cmd openshift feature scenarios passed failed steps passed failed skipped testing warning no tests to run pass exit status fail github com minishift minishift test integration make error ,0
6040,21940581337.0,IssuesEvent,2022-05-23 17:39:12,pharmaverse/admiral,https://api.github.com/repos/pharmaverse/admiral,closed,Create workflow to automatically create man files,automation,"The workflow should be triggered whenever something is pushed to `devel` or `master`, run `devtools::document()` and commited any updated file in the `man` folder.",1.0,"Create workflow to automatically create man files - The workflow should be triggered whenever something is pushed to `devel` or `master`, run `devtools::document()` and commited any updated file in the `man` folder.",1,create workflow to automatically create man files the workflow should be triggered whenever something is pushed to devel or master run devtools document and commited any updated file in the man folder ,1
137052,11097825807.0,IssuesEvent,2019-12-16 14:07:13,zeebe-io/zeebe,https://api.github.com/repos/zeebe-io/zeebe,closed,LogStreamTest.shouldCloseLogStream unstabled,Status: Needs Review Type: Maintenance Type: Unstable Test,"**Description**
Failed sometimes in the CI.
```
[ERROR] Tests run: 4, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 2.074 s <<< FAILURE! - in io.zeebe.logstreams.log.LogStreamTest
[ERROR] io.zeebe.logstreams.log.LogStreamTest.shouldCloseLogStream Time elapsed: 0.683 s <<< FAILURE!
java.lang.AssertionError:
Expecting code to raise a throwable.
at io.zeebe.logstreams.log.LogStreamTest.shouldCloseLogStream(LogStreamTest.java:91)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at org.junit.runners.Suite.runChild(Suite.java:128)
at org.junit.runners.Suite.runChild(Suite.java:27)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at org.apache.maven.surefire.junitcore.JUnitCore.run(JUnitCore.java:55)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.createRequestAndRun(JUnitCoreWrapper.java:137)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.executeLazy(JUnitCoreWrapper.java:119)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:87)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:75)
at org.apache.maven.surefire.junitcore.JUnitCoreProvider.invoke(JUnitCoreProvider.java:158)
at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377)
at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138)
at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465)
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451)
```
Output:
```
12:13:49.119 [] [main] INFO io.zeebe.test - Test finished: shouldCreateNewLogStreamBatchWriter(io.zeebe.logstreams.log.LogStreamTest)
12:13:49.120 [] [main] INFO io.zeebe.test - Test started: shouldCloseLogStream(io.zeebe.logstreams.log.LogStreamTest)
12:13:49.313 [io.zeebe.logstreams.impl.LogStreamBuilder$1] [-zb-actors-3] WARN io.zeebe.logstreams - Unexpected non-empty log failed to read the last block
12:13:49.318 [io.zeebe.logstreams.impl.log.LogStreamImpl] [-zb-actors-3] WARN io.zeebe.logstreams - Unexpected non-empty log failed to read the last block
12:13:49.533 [io.zeebe.logstreams.impl.log.LogStreamImpl] [-zb-actors-1] DEBUG io.zeebe.logstreams - Configured log appender back pressure at partition 0 as AppenderVegasCfg{initialLimit=1024, maxConcurrency=32768, alphaLimit=0.7, betaLimit=0.95}. Window limiting is disabled
12:13:49.600 [io.zeebe.logstreams.impl.log.LogStreamImpl] [-zb-actors-3] INFO io.zeebe.logstreams - Close appender for log stream 0
12:13:49.601 [0-write-buffer] [-zb-actors-3] DEBUG io.zeebe.dispatcher - Dispatcher closed
12:13:49.602 [io.zeebe.logstreams.impl.log.LogStreamImpl] [-zb-actors-1] INFO io.zeebe.logstreams - On closing logstream 0 close 1 readers
12:13:49.603 [io.zeebe.logstreams.impl.log.LogStreamImpl] [-zb-actors-1] INFO io.zeebe.logstreams - Close log storage with name 0
```",1.0,"LogStreamTest.shouldCloseLogStream unstabled - **Description**
Failed sometimes in the CI.
```
[ERROR] Tests run: 4, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 2.074 s <<< FAILURE! - in io.zeebe.logstreams.log.LogStreamTest
[ERROR] io.zeebe.logstreams.log.LogStreamTest.shouldCloseLogStream Time elapsed: 0.683 s <<< FAILURE!
java.lang.AssertionError:
Expecting code to raise a throwable.
at io.zeebe.logstreams.log.LogStreamTest.shouldCloseLogStream(LogStreamTest.java:91)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at org.junit.runners.Suite.runChild(Suite.java:128)
at org.junit.runners.Suite.runChild(Suite.java:27)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at org.apache.maven.surefire.junitcore.JUnitCore.run(JUnitCore.java:55)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.createRequestAndRun(JUnitCoreWrapper.java:137)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.executeLazy(JUnitCoreWrapper.java:119)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:87)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:75)
at org.apache.maven.surefire.junitcore.JUnitCoreProvider.invoke(JUnitCoreProvider.java:158)
at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377)
at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138)
at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465)
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451)
```
Output:
```
12:13:49.119 [] [main] INFO io.zeebe.test - Test finished: shouldCreateNewLogStreamBatchWriter(io.zeebe.logstreams.log.LogStreamTest)
12:13:49.120 [] [main] INFO io.zeebe.test - Test started: shouldCloseLogStream(io.zeebe.logstreams.log.LogStreamTest)
12:13:49.313 [io.zeebe.logstreams.impl.LogStreamBuilder$1] [-zb-actors-3] WARN io.zeebe.logstreams - Unexpected non-empty log failed to read the last block
12:13:49.318 [io.zeebe.logstreams.impl.log.LogStreamImpl] [-zb-actors-3] WARN io.zeebe.logstreams - Unexpected non-empty log failed to read the last block
12:13:49.533 [io.zeebe.logstreams.impl.log.LogStreamImpl] [-zb-actors-1] DEBUG io.zeebe.logstreams - Configured log appender back pressure at partition 0 as AppenderVegasCfg{initialLimit=1024, maxConcurrency=32768, alphaLimit=0.7, betaLimit=0.95}. Window limiting is disabled
12:13:49.600 [io.zeebe.logstreams.impl.log.LogStreamImpl] [-zb-actors-3] INFO io.zeebe.logstreams - Close appender for log stream 0
12:13:49.601 [0-write-buffer] [-zb-actors-3] DEBUG io.zeebe.dispatcher - Dispatcher closed
12:13:49.602 [io.zeebe.logstreams.impl.log.LogStreamImpl] [-zb-actors-1] INFO io.zeebe.logstreams - On closing logstream 0 close 1 readers
12:13:49.603 [io.zeebe.logstreams.impl.log.LogStreamImpl] [-zb-actors-1] INFO io.zeebe.logstreams - Close log storage with name 0
```",0,logstreamtest shouldcloselogstream unstabled description failed sometimes in the ci tests run failures errors skipped time elapsed s failure in io zeebe logstreams log logstreamtest io zeebe logstreams log logstreamtest shouldcloselogstream time elapsed s failure java lang assertionerror expecting code to raise a throwable at io zeebe logstreams log logstreamtest shouldcloselogstream logstreamtest java at java base jdk internal reflect nativemethodaccessorimpl native method at java base jdk internal reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at java base jdk internal reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java base java lang reflect method invoke method java at org junit runners model frameworkmethod runreflectivecall frameworkmethod java at org junit internal runners model reflectivecallable run reflectivecallable java at org junit runners model frameworkmethod invokeexplosively frameworkmethod java at org junit internal runners statements invokemethod evaluate invokemethod java at org junit internal runners statements runbefores evaluate runbefores java at org junit rules externalresource evaluate externalresource java at org junit rules externalresource evaluate externalresource java at org junit rules runrules evaluate runrules java at org junit runners parentrunner runleaf parentrunner java at org junit runners runchild java at org junit runners runchild java at org junit runners parentrunner run parentrunner java at org junit runners parentrunner schedule parentrunner java at org junit runners parentrunner runchildren parentrunner java at org junit runners parentrunner access parentrunner java at org junit runners parentrunner evaluate parentrunner java at org junit runners parentrunner run parentrunner java at org junit runners suite runchild suite java at org junit runners suite runchild suite java at org junit runners parentrunner run parentrunner java at org junit runners parentrunner schedule parentrunner java at org junit runners parentrunner runchildren parentrunner java at org junit runners parentrunner access parentrunner java at org junit runners parentrunner evaluate parentrunner java at org junit runners parentrunner run parentrunner java at org apache maven surefire junitcore junitcore run junitcore java at org apache maven surefire junitcore junitcorewrapper createrequestandrun junitcorewrapper java at org apache maven surefire junitcore junitcorewrapper executelazy junitcorewrapper java at org apache maven surefire junitcore junitcorewrapper execute junitcorewrapper java at org apache maven surefire junitcore junitcorewrapper execute junitcorewrapper java at org apache maven surefire junitcore junitcoreprovider invoke junitcoreprovider java at org apache maven surefire booter forkedbooter runsuitesinprocess forkedbooter java at org apache maven surefire booter forkedbooter execute forkedbooter java at org apache maven surefire booter forkedbooter run forkedbooter java at org apache maven surefire booter forkedbooter main forkedbooter java output info io zeebe test test finished shouldcreatenewlogstreambatchwriter io zeebe logstreams log logstreamtest info io zeebe test test started shouldcloselogstream io zeebe logstreams log logstreamtest warn io zeebe logstreams unexpected non empty log failed to read the last block warn io zeebe logstreams unexpected non empty log failed to read the last block debug io zeebe logstreams configured log appender back pressure at partition as appendervegascfg 
initiallimit maxconcurrency alphalimit betalimit window limiting is disabled info io zeebe logstreams close appender for log stream debug io zeebe dispatcher dispatcher closed info io zeebe logstreams on closing logstream close readers info io zeebe logstreams close log storage with name ,0
281243,30888436302.0,IssuesEvent,2023-08-04 01:19:37,hshivhare67/kernel_v4.1.15,https://api.github.com/repos/hshivhare67/kernel_v4.1.15,reopened,CVE-2017-12762 (Critical) detected in linuxlinux-4.6,Mend: dependency security vulnerability,"## CVE-2017-12762 - Critical Severity Vulnerability
Vulnerable Library - linuxlinux-4.6
In /drivers/isdn/i4l/isdn_net.c: A user-controlled buffer is copied into a local buffer of constant size using strcpy without a length check which can cause a buffer overflow. This affects the Linux kernel 4.9-stable tree, 4.12-stable tree, 3.18-stable tree, and 4.4-stable tree.
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2017-12762 (Critical) detected in linuxlinux-4.6 - ## CVE-2017-12762 - Critical Severity Vulnerability
Vulnerable Library - linuxlinux-4.6
In /drivers/isdn/i4l/isdn_net.c: A user-controlled buffer is copied into a local buffer of constant size using strcpy without a length check which can cause a buffer overflow. This affects the Linux kernel 4.9-stable tree, 4.12-stable tree, 3.18-stable tree, and 4.4-stable tree.
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve critical detected in linuxlinux cve critical severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in base branch master vulnerable source files drivers isdn isdn common c drivers isdn isdn common c vulnerability details in drivers isdn isdn net c a user controlled buffer is copied into a local buffer of constant size using strcpy without a length check which can cause a buffer overflow this affects the linux kernel stable tree stable tree stable tree and stable tree publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version release date fix resolution step up your open source security game with mend ,0
10156,31814696081.0,IssuesEvent,2023-09-13 19:30:56,figuren-theater/ft-platform,https://api.github.com/repos/figuren-theater/ft-platform,closed,Establish quality standards,automation,"```[tasklist]
### Repository Standards
- [x] Has nice [README.md](https://github.com/figuren-theater/new-ft-module/blob/main/README.md)
- [x] Add [`.github/workflows/ft-issue-gardening.yml`](https://github.com/figuren-theater/new-ft-module/blob/main/.github/workflows/ft-issue-gardening.yml) file (if not exists)
- [x] Add [`.github/workflows/release-drafter.yml`](https://github.com/figuren-theater/new-ft-module/blob/main/.github/workflows/release-drafter.yml) file
- [x] Delete [`.github/workflows/update-changelog.yml`](https://github.com/figuren-theater/new-ft-module/blob/main/.github/workflows/update-changelog.yml) file
- [x] Add [`.github/workflows/prerelease-changelog.yml`](https://github.com/figuren-theater/new-ft-module/blob/main/.github/workflows/prerelease-changelog.yml) file
- [x] Add [`.editorconfig`](https://github.com/figuren-theater/new-ft-module/blob/main/.editorconfig) file
- [x] Add [`.phpcs.xml`](https://github.com/figuren-theater/new-ft-module/blob/main/.phpcs.xml) file
- [x] Check that `.phpcs.xml` file is not present in `.gitignore`
- [x] Add [`CHANGELOG.md`](https://github.com/figuren-theater/new-ft-module/blob/main/CHANGELOG.md) file with an *Unreleased-Heading*
- [x] Add [`phpstan.neon`](https://github.com/figuren-theater/new-ft-module/blob/main/phpstan.neon) file
- [x] Run `composer require --dev figuren-theater/code-quality`
- [x] Run `composer normalize`
- [x] Run `vendor/bin/phpstan analyze .`
- [x] Run `vendor/bin/phpcs .`
- [x] Fix all errors ;)
- [x] commit, PR & merge all (additional) changes
- [x] Has branch protection enabled
- [x] Add [`.github/workflows/build-test-measure.yml`](https://github.com/figuren-theater/new-ft-module/blob/main/.github/workflows/build-test-measure.yml) file
- [x] Enable repo for required **Build, test & measure** status checks via [Repo Settings](/settings/actions)
- [x] Add **Build, test & measure** badge to the [code-quality](https://github.com/figuren-theater/code-quality) README
- [x] Submit repo to [packagist.org](https://packagist.org/packages/figuren-theater/)
- [x] Remove explicit `repositories` entry from [ft-platform](https://github.com/figuren-theater/ft-platform)s `composer.json`
- [x] Update `README.md` to see all workflows running
- [x] Publish the new drafted Release as Prerelease to trigger auto-updating versions in CHANGELOG.md and plugin.php --> THIS WILL A TRIGGER A DEPLOY !!!
- [ ] https://github.com/figuren-theater/ft-platform/issues/14
```
",1.0,"Establish quality standards - ```[tasklist]
### Repository Standards
- [x] Has nice [README.md](https://github.com/figuren-theater/new-ft-module/blob/main/README.md)
- [x] Add [`.github/workflows/ft-issue-gardening.yml`](https://github.com/figuren-theater/new-ft-module/blob/main/.github/workflows/ft-issue-gardening.yml) file (if not exists)
- [x] Add [`.github/workflows/release-drafter.yml`](https://github.com/figuren-theater/new-ft-module/blob/main/.github/workflows/release-drafter.yml) file
- [x] Delete [`.github/workflows/update-changelog.yml`](https://github.com/figuren-theater/new-ft-module/blob/main/.github/workflows/update-changelog.yml) file
- [x] Add [`.github/workflows/prerelease-changelog.yml`](https://github.com/figuren-theater/new-ft-module/blob/main/.github/workflows/prerelease-changelog.yml) file
- [x] Add [`.editorconfig`](https://github.com/figuren-theater/new-ft-module/blob/main/.editorconfig) file
- [x] Add [`.phpcs.xml`](https://github.com/figuren-theater/new-ft-module/blob/main/.phpcs.xml) file
- [x] Check that `.phpcs.xml` file is not present in `.gitignore`
- [x] Add [`CHANGELOG.md`](https://github.com/figuren-theater/new-ft-module/blob/main/CHANGELOG.md) file with an *Unreleased-Heading*
- [x] Add [`phpstan.neon`](https://github.com/figuren-theater/new-ft-module/blob/main/phpstan.neon) file
- [x] Run `composer require --dev figuren-theater/code-quality`
- [x] Run `composer normalize`
- [x] Run `vendor/bin/phpstan analyze .`
- [x] Run `vendor/bin/phpcs .`
- [x] Fix all errors ;)
- [x] commit, PR & merge all (additional) changes
- [x] Has branch protection enabled
- [x] Add [`.github/workflows/build-test-measure.yml`](https://github.com/figuren-theater/new-ft-module/blob/main/.github/workflows/build-test-measure.yml) file
- [x] Enable repo for required **Build, test & measure** status checks via [Repo Settings](/settings/actions)
- [x] Add **Build, test & measure** badge to the [code-quality](https://github.com/figuren-theater/code-quality) README
- [x] Submit repo to [packagist.org](https://packagist.org/packages/figuren-theater/)
- [x] Remove explicit `repositories` entry from [ft-platform](https://github.com/figuren-theater/ft-platform)s `composer.json`
- [x] Update `README.md` to see all workflows running
- [x] Publish the new drafted Release as Prerelease to trigger auto-updating versions in CHANGELOG.md and plugin.php --> THIS WILL A TRIGGER A DEPLOY !!!
- [ ] https://github.com/figuren-theater/ft-platform/issues/14
```
",1,establish quality standards repository standards has nice add file if not exists add file delete file add file add file add file check that phpcs xml file is not present in gitignore add file with an unreleased heading add file run composer require dev figuren theater code quality run composer normalize run vendor bin phpstan analyze run vendor bin phpcs fix all errors commit pr merge all additional changes has branch protection enabled add file enable repo for required build test measure status checks via settings actions add build test measure badge to the readme submit repo to remove explicit repositories entry from composer json update readme md to see all workflows running publish the new drafted release as prerelease to trigger auto updating versions in changelog md and plugin php this will a trigger a deploy ,1
5297,19029532929.0,IssuesEvent,2021-11-24 09:12:40,mozilla-mobile/firefox-ios,https://api.github.com/repos/mozilla-mobile/firefox-ios,opened,[L10n Tests] Update/remove l10n-screenshots.config.yml ,eng:automation,"If we can get the project's locale and use them while triggering the hook to start the Taskcluster job, we could remove this file.
If we can't do that, we should update this file as there are mismatches between the locales there and the locales we have to get screenshots for.",1.0,"[L10n Tests] Update/remove l10n-screenshots.config.yml - If we can get the project's locale and use them while triggering the hook to start the Taskcluster job, we could remove this file.
If we can't do that, we should update this file as there are mismatches between the locales there and the locales we have to get screenshots for.",1, update remove screenshots config yml if we can get the project s locale and use them while triggering the hook to start the taskcluster job we could remove this file if we can t do that we should update this file as there are mismatches between the locales there and the locales we have to get screenshots for ,1
8798,27172261107.0,IssuesEvent,2023-02-17 20:36:31,OneDrive/onedrive-api-docs,https://api.github.com/repos/OneDrive/onedrive-api-docs,closed,Files inheriting permissions do not show in delta query,status:investigating Needs: Triage :mag: automation:Closed,"#### Category
- [x] Question
- [x] Documentation issue
- [ ] Bug
#### Expected or Desired Behavior
According to the [docs](https://docs.microsoft.com/en-us/onedrive/developer/rest-api/concepts/scan-guidance?view=odsp-graph-online#scanning-permissions-hierarchies):
> By default, the delta query response will include sharing information for all items in the query that changed even if they inherit their permissions from their parent and did not have direct sharing changes themselves
For a directory `dir1` containing `fileA`, changing the permission on `dir1` should show `fileA` in the GET /delta response.
#### Observed Behavior
Changing permission on a directory does not produce items in the GET /delta query response for files inheriting the directory permission.
#### Steps to Reproduce
1. Create directory `dir1`
1. Create `fileA` in `dir1`
1. Get the latest delta token via `GET /users/{userId}/drive/root/delta?token=latest`
1. Create a shareable link on `dir1` giving everyone with the link access.
1. Get the latest changes via `GET /users/{userId}/drive/root/delta?token=`
1. `fileA` is not included in the response
```
> GET /users/{userId}/drive/root/delta?token=
{
""@odata.context"": ""https://graph.microsoft.com/v1.0/$metadata#Collection(driveItem)"",
""@odata.deltaLink"": ""..."",
""value"": [
{
""@odata.type"": ""#microsoft.graph.driveItem"",
""name"": ""root"",
...
},
{
""@odata.type"": ""#microsoft.graph.driveItem"",
""name"": ""dir1"",
...
}
]
}
```",1.0,"Files inheriting permissions do not show in delta query - #### Category
- [x] Question
- [x] Documentation issue
- [ ] Bug
#### Expected or Desired Behavior
According to the [docs](https://docs.microsoft.com/en-us/onedrive/developer/rest-api/concepts/scan-guidance?view=odsp-graph-online#scanning-permissions-hierarchies):
> By default, the delta query response will include sharing information for all items in the query that changed even if they inherit their permissions from their parent and did not have direct sharing changes themselves
For a directory `dir1` containing `fileA`, changing the permission on `dir1` should show `fileA` in the GET /delta response.
#### Observed Behavior
Changing permission on a directory does not produce items in the GET /delta query response for files inheriting the directory permission.
#### Steps to Reproduce
1. Create directory `dir1`
1. Create `fileA` in `dir1`
1. Get the latest delta token via `GET /users/{userId}/drive/root/delta?token=latest`
1. Create a shareable link on `dir1` giving everyone with the link access.
1. Get the latest changes via `GET /users/{userId}/drive/root/delta?token=`
1. `fileA` is not included in the response
```
> GET /users/{userId}/drive/root/delta?token=
{
""@odata.context"": ""https://graph.microsoft.com/v1.0/$metadata#Collection(driveItem)"",
""@odata.deltaLink"": ""..."",
""value"": [
{
""@odata.type"": ""#microsoft.graph.driveItem"",
""name"": ""root"",
...
},
{
""@odata.type"": ""#microsoft.graph.driveItem"",
""name"": ""dir1"",
...
}
]
}
```",1,files inheriting permissions do not show in delta query category question documentation issue bug expected or desired behavior according to the by default the delta query response will include sharing information for all items in the query that changed even if they inherit their permissions from their parent and did not have direct sharing changes themselves for a directory containing filea changing the permission on should show filea in the get delta response observed behavior changing permission on a directory does not produce items in the get delta query response for files inheriting the directory permission steps to reproduce create directory create filea in get the latest delta token via get users userid drive root delta token latest create a shareable link on giving everyone with the link access get the latest changes via get users userid drive root delta token filea is not included in the respone get users userid drive root delta token odata context odata deltalink value odata type microsoft graph driveitem name root odata type microsoft graph driveitem name ,1
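The reproduction steps above can be scripted directly against the endpoints quoted in the report. A rough sketch follows, where the user id and access token are placeholders rather than values from the issue:
```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
USER_ID = "<user-id>"                                 # placeholder
HEADERS = {"Authorization": "Bearer <access-token>"}  # placeholder token

# Capture the latest delta token before changing any permissions.
resp = requests.get(f"{GRAPH}/users/{USER_ID}/drive/root/delta?token=latest",
                    headers=HEADERS)
resp.raise_for_status()
delta_link = resp.json()["@odata.deltaLink"]

# (Out of band: create the shareable link on dir1, as in the report.)

# Replay the delta link and check whether fileA is reported.
changes = requests.get(delta_link, headers=HEADERS).json()
changed_names = [item.get("name") for item in changes.get("value", [])]
print("fileA included:", "fileA" in changed_names)
```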
4325,16087338687.0,IssuesEvent,2021-04-26 12:55:42,wazuh/wazuh-qa,https://api.github.com/repos/wazuh/wazuh-qa,opened,Integration tests for Vulnerability Detector: Amazon Linux support,automation core/vuln detector qa,"### Description
This issue is part of wazuh/wazuh#6734
We will include support for the Amazon Linux OS, including two new feeds, one for Amazon Linux and another one for Amazon Linux 2.
The following tests should cover the new supported OS:
- Feed tests: These tests have to cover the addition of the new feeds for Amazon Linux
- Provider tests: Here we will test the settings for the new provider
- Scan results tests: Here we have to add the tests that ensure Amazon Linux agents are scanned properly",1.0,"Integration tests for Vulnerability Detector: Amazon Linux support - ### Description
This issue is part of wazuh/wazuh#6734
We will include support for the Amazon Linux OS, including two new feeds, one for Amazon Linux and another one for Amazon Linux 2.
The following tests should cover the new supported OS:
- Feed tests: These tests have to cover the addition of the new feeds for Amazon Linux
- Provider tests: Here we will test the settings for the new provider
- Scan results tests: Here we have to add the tests that ensure Amazon Linux agents are scanned properly",1,integration tests for vulnerability detector amazon linux support description this issue is part of wazuh wazuh we will include support for the amazon linux os including two new feeds one for amazon linux and another one for amazon linux the following tests should cover the new supported os feed tests these tests have to cover the addition of the new feeds for amazon linux provider tests here we will test the settings for the new provider scan results tests here we have to add the tests that ensure amazon linux agents are scanned properly,1
3387,13631989065.0,IssuesEvent,2020-09-24 18:54:17,DiptoChakrabarty/Jenkins,https://api.github.com/repos/DiptoChakrabarty/Jenkins,opened,Implement Jenkins slave master infrastructure build up ,CI/CD automation devops enhancement hacktoberfest,"Currently the role only provides the functionality to configure Jenkins on remote systems; we should carry this forward to implement a Jenkins master and slave model by specifying the number of slaves we desire, along with their configuration details.",1.0,"Implement Jenkins slave master infrastructure build up - Currently the role only provides the functionality to configure Jenkins on remote systems; we should carry this forward to implement a Jenkins master and slave model by specifying the number of slaves we desire, along with their configuration details.",1,implement jenkins slave master infrastructure build up currently the role only provides the functionality to configure jenkins in remote systems we should carry this forward to implement jenkins master and slave model by specifying the number of slaves we desire along with their configuration details ,1
1193,9666047446.0,IssuesEvent,2019-05-21 09:52:45,research-software-reactor/cyclecloud,https://api.github.com/repos/research-software-reactor/cyclecloud,opened,Create automated deployment template using ARM,step2setup-automation,Probably excluding identity as this is being worked on separately. ,1.0,Create automated deployment template using ARM - Probably excluding identity as this is being worked on separately. ,1,create automated deployment template using arm probably excluding identity as this is being worked on separately ,1
61056,8484539202.0,IssuesEvent,2018-10-26 03:10:26,NicholasThrom/typesafe-json,https://api.github.com/repos/NicholasThrom/typesafe-json,closed,Add updating changelog to PR template,documentation,"Some cards need the changelog to be updated when they are merged, but I know I'm going to forget to do that. A checkbox on PRs would be helpful.",1.0,"Add updating changelog to PR template - Some cards need the changelog to be updated when they are merged, but I know I'm going to forget to do that. A checkbox on PRs would be helpful.",0,add updating changelog to pr template some cards need the changelog to be updated when they are merged but i know i m going to forget to do that a checkbox on prs would be helpful ,0
114814,4646676741.0,IssuesEvent,2016-10-01 02:11:23,bbengfort/cloudscope,https://api.github.com/repos/bbengfort/cloudscope,closed,Federated Backpressure,in progress priority: high type: feature,"1. Version numbers get an additional component that can only be incremented by the Raft leader.
2. When Raft commits a write, it increments that counter
3. Because versions are compared starting from the Raft number first, this has the effect of making a committed write the most recent write in the system (e.g. +200 version number).
4. Dependencies are all tracked by the original version number
5. On eventual receipt of a higher version that is already in the log:
- find all dependencies of that version, and make their ""raft marker version"" equal to the parent.
- continue until ""latest local"" - then set that one to latest to write to and push in gossip.
I believe this system will eventually converge.
In Eventual only: the raft number is always 0, so eventual just works the same way as always
In Raft only: The ""raft number"" will be a monotonically increasing commit sequence but will have no other effect.
In Federated:
Given the following scenario:
```
A.1.0
/ \
A.2.0 A.3.0
| |
A.4.0 A.5.0
```
If A.2.0 goes to Raft, Raft will make it A.2.1 and will reject A.3.0; The replica that performed Anti-Entropy with Raft will make A.4.0 --> A.4.1 and when A.5.0 comes in via anti-entropy, A.4.1 > A.5.0 so A.5.0 will be tossed out.",1.0,"Federated Backpressure - 1. Version numbers get an additional component that can only be incremented by the Raft leader.
2. When Raft commits a write, it increments that counter
3. Because versions are compared starting from the Raft number first, this has the effect of making a committed write the most recent write in the system (e.g. +200 version number).
4. Dependencies are all tracked by the original version number
5. On eventual receipt of a higher version that is already in the log:
- find all dependencies of that version, and make their ""raft marker version"" equal to the parent.
- continue until ""latest local"" - then set that one to latest to write to and push in gossip.
I believe this system will eventually converge.
In Eventual only: the raft number is always 0, so eventual just works the same way as always
In Raft only: The ""raft number"" will be a monotonically increasing commit sequence but will have no other effect.
In Federated:
Given the following scenario:
```
A.1.0
/ \
A.2.0 A.3.0
| |
A.4.0 A.5.0
```
If A.2.0 goes to Raft, Raft will make it A.2.1 and will reject A.3.0; The replica that performed Anti-Entropy with Raft will make A.4.0 --> A.4.1 and when A.5.0 comes in via anti-entropy, A.4.1 > A.5.0 so A.5.0 will be tossed out.",0,federated backpressure version numbers get an additional component that can only be incremented by the raft leader when raft commits a write it increments that counter because versions are compared starting from the raft number first this has the affect of making a committed write the most recent write in the system e g version number dependencies are all tracked by the original version number on eventual receipt of a higher version that is already in the log find all dependencies of that version and make their raft marker version equal to the parent continue until latest local then set that one to latest to write to and push in gossip i believe this system will eventually converge in eventual only the raft number is always so eventual just works the same way as always in raft only the raft number will be a monotonically increasing commit sequence but will have no other affect in federated given the following scenario a a a a a if a goes to raft raft will make it a and will reject a the replica that performed anti entropy with raft will make a a and when a comes in via anti entropy a a so a will be tossed out ,0
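To make the ordering rule in this proposal concrete, here is a minimal sketch of version comparison with a Raft commit counter that dominates the original counter. This is an illustrative model only, not cloudscope code; the `Version` class and its field names are invented for the example.

```python
# Illustrative model of the proposed ordering (invented names, not cloudscope
# code): a version carries a Raft commit counter that is compared before the
# original counter, so a committed write outranks any uncommitted one.
from dataclasses import dataclass

@dataclass(frozen=True)
class Version:
    replica: str   # originating replica, e.g. "A"
    counter: int   # original (gossip) version number
    raft: int = 0  # incremented only when Raft commits the write

    def sort_key(self):
        # Comparing (raft, counter) makes a committed write the most recent
        # write in the system, regardless of the original counter.
        return (self.raft, self.counter)

def latest(a: Version, b: Version) -> Version:
    return a if a.sort_key() >= b.sort_key() else b

committed = Version("A", 2, raft=1)  # A.2.0 after the Raft commit bumps it to A.2.1
stale = Version("A", 5, raft=0)      # A.5.0, never committed
assert latest(committed, stale) is committed
```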
2714,12467889645.0,IssuesEvent,2020-05-28 17:52:03,dotnet/interactive,https://api.github.com/repos/dotnet/interactive,opened,Command line method for converting between .ipynb and .dib,Area-Automation Area-Jupyter enhancement,"Currently, the VS Code extension contains logic for converting a `.ipynb` into a `.dib` and vice versa. In order to support automation scenarios, this functionality should be made available through a subcommand on the `dotnet-interactive` CLI and as API methods within `Microsoft.DotNet.Interactive.Jupyter`, e.g.:
```console
> dotnet interactive ipynb-to-dib /path/to/existing.ipynb /path/to/created.dib
```
Related: #467.
",1.0,"Command line method for converting between .ipynb and .dib - Currently, the VS Code extension contains logic for converting a `.ipynb` into a `.dib` and vice versa. In order to support automation scenarios, this functionality should be made available through a subcommand on the `dotnet-interactive` CLI and as API methods within `Microsoft.DotNet.Interactive.Jupyter`, e.g.:
```console
> dotnet interactive ipynb-to-dib /path/to/existing.ipynb /path/to/created.dib
```
Related: #467.
",1,command line method for converting between ipynb and dib currently the vs code extension contains logic for converting a ipynb into a dib and vice versa in order to support automation scenarios this functionality should be made available through a subcommand on the dotnet interactive cli and as api methods within microsoft dotnet interactive jupyter e g console dotnet interactive ipynb to dib path to existing ipynb path to created dib related ,1
392770,26957958685.0,IssuesEvent,2023-02-08 16:06:54,hyperledger/firefly,https://api.github.com/repos/hyperledger/firefly,closed,Need documentation on the updated blockchain operation structure,documentation,See https://miro.com/app/board/uXjVOWHk_6s=/?moveToWidget=3458764544594470770&cot=14 for the new `history` and `historySummary` fields.,1.0,Need documentation on the updated blockchain operation structure - See https://miro.com/app/board/uXjVOWHk_6s=/?moveToWidget=3458764544594470770&cot=14 for the new `history` and `historySummary` fields.,0,need documentation on the updated blockchain operation structure see for the new history and historysummary fields ,0
3748,14491739545.0,IssuesEvent,2020-12-11 05:24:52,longhorn/longhorn,https://api.github.com/repos/longhorn/longhorn,opened,[BUG]Engine Image fail to reach `ready` state if there are tainted worker node,area/manager bug priority/2 require/automation-e2e,"**Describe the bug**
The engine image fails to reach the `ready` state if there is a tainted worker node in the cluster and Longhorn doesn't tolerate the taint.
**To Reproduce**
Steps to reproduce the behavior:
1. Install Longhorn
2. Taint one of the node, then delete the engine image daemonset to allow recreation
3. Volume operations (attach/detach) will fail because the engine image is not ready.
**Expected behavior**
Volume operations should still work.
**Log**
```
2020-12-09T13:34:19.849639616-05:00 time=""2020-12-09T18:34:19Z"" level=error msg=""Error in request: unable to attach volume pvc-xxx to xxx: cannot attach volume pvc-xxx7 with image longhornio/longhorn-engine:v1.0.2: engine image ei-ee18f965 (longhornio/longhorn-engine:v1.0.2) is not ready, it's deploying""
```
```
typemeta:
kind: """"
apiversion: """"
objectmeta:
name: engine-image-ei-ee18f965
...
status:
currentnumberscheduled: 6
numbermisscheduled: 1
desirednumberscheduled: 6
numberready: 6
observedgeneration: 1
updatednumberscheduled: 6
numberavailable: 6
numberunavailable: 0
collisioncount: null
conditions: []
```
**Environment:**
- Longhorn version: v1.0.2
- Kubernetes distro (e.g. RKE/K3s/EKS/OpenShift) and version: k3s v1.18.6
- Node config
- OS type and version: RHEL
- CPU per node: 4
- Memory per node: 8
- Disk type(e.g. SSD/NVMe): SSD
- Network bandwidth and latency between the nodes: 10GB
- Underlying Infrastructure (e.g. on AWS/GCE, EKS/GKE, VMWare/KVM, Baremetal): VMWare
**Additional context**
It's because the tainted node prevents the daemonset from reaching full availability, resulting in the engine image object failing to reach the ready state.
",1.0,"[BUG]Engine Image fail to reach `ready` state if there are tainted worker node - **Describe the bug**
The engine image fails to reach the `ready` state if there is a tainted worker node in the cluster and Longhorn doesn't tolerate the taint.
**To Reproduce**
Steps to reproduce the behavior:
1. Install Longhorn
2. Taint one of the node, then delete the engine image daemonset to allow recreation
3. Volume operations (attach/detach) will fail because the engine image is not ready.
**Expected behavior**
Volume operations should still work.
**Log**
```
2020-12-09T13:34:19.849639616-05:00 time=""2020-12-09T18:34:19Z"" level=error msg=""Error in request: unable to attach volume pvc-xxx to xxx: cannot attach volume pvc-xxx7 with image longhornio/longhorn-engine:v1.0.2: engine image ei-ee18f965 (longhornio/longhorn-engine:v1.0.2) is not ready, it's deploying""
```
```
typemeta:
kind: """"
apiversion: """"
objectmeta:
name: engine-image-ei-ee18f965
...
status:
currentnumberscheduled: 6
numbermisscheduled: 1
desirednumberscheduled: 6
numberready: 6
observedgeneration: 1
updatednumberscheduled: 6
numberavailable: 6
numberunavailable: 0
collisioncount: null
conditions: []
```
**Environment:**
- Longhorn version: v1.0.2
- Kubernetes distro (e.g. RKE/K3s/EKS/OpenShift) and version: k3s v1.18.6
- Node config
- OS type and version: RHEL
- CPU per node: 4
- Memory per node: 8
- Disk type(e.g. SSD/NVMe): SSD
- Network bandwidth and latency between the nodes: 10GB
- Underlying Infrastructure (e.g. on AWS/GCE, EKS/GKE, VMWare/KVM, Baremetal): VMWare
**Additional context**
It's because the tainted node prevents the daemonset from reaching full availability, resulting in the engine image object failing to reach the ready state.
",1, engine image fail to reach ready state if there are tainted worker node describe the bug engine image fail to reach ready state if there are tainted worker node in the cluster and longhorn doesn t tolerate the taint to reproduce steps to reproduce the behavior install longhorn taint one of the node then delete the engine image daemonset to allow recreation volume operations attach detach will fail due to engine image is not ready expected behavior volume operations should still work log time level error msg error in request unable to attach volume pvc xxx to xxx cannot attach volume pvc with image longhornio longhorn engine engine image ei longhornio longhorn engine is not ready it s deploying typemeta kind apiversion objectmeta name engine image ei status currentnumberscheduled numbermisscheduled desirednumberscheduled numberready observedgeneration updatednumberscheduled numberavailable numberunavailable collisioncount null conditions environment longhorn version kubernetes distro e g rke eks openshift and version node config os type and version rhel cpu per node memory per node disk type e g ssd nvme ssd network bandwidth and latency between the nodes underlying infrastructure e g on aws gce eks gke vmware kvm baremetal vmware additional context it s due to the tainted node prevent the daemonset to reach the full availability result in engine image object failed to reach ready state ,1
110941,17009632871.0,IssuesEvent,2021-07-02 00:58:39,Chiencc/angular,https://api.github.com/repos/Chiencc/angular,opened,"CVE-2018-20677 (Medium) detected in bootstrap-3.3.7.tgz, bootstrap-3.1.1.tgz",security vulnerability,"## CVE-2018-20677 - Medium Severity Vulnerability
Vulnerable Libraries - bootstrap-3.3.7.tgz, bootstrap-3.1.1.tgz
bootstrap-3.3.7.tgz
The most popular front-end framework for developing responsive, mobile first projects on the web.
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2018-20677 (Medium) detected in bootstrap-3.3.7.tgz, bootstrap-3.1.1.tgz - ## CVE-2018-20677 - Medium Severity Vulnerability
Vulnerable Libraries - bootstrap-3.3.7.tgz, bootstrap-3.1.1.tgz
bootstrap-3.3.7.tgz
The most popular front-end framework for developing responsive, mobile first projects on the web.
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve medium detected in bootstrap tgz bootstrap tgz cve medium severity vulnerability vulnerable libraries bootstrap tgz bootstrap tgz bootstrap tgz the most popular front end framework for developing responsive mobile first projects on the web library home page a href path to dependency file angular package json path to vulnerable library angular node modules bootstrap dependency hierarchy angular benchpress tgz root library x bootstrap tgz vulnerable library bootstrap tgz sleek intuitive and powerful front end framework for faster and easier web development library home page a href path to dependency file angular package json path to vulnerable library angular node modules bootstrap dependency hierarchy x bootstrap tgz vulnerable library found in head commit a href found in base branch master vulnerability details in bootstrap before xss is possible in the affix configuration target property publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution bootstrap nordron angulartemplate dynamic net express projecttemplates dotnetng template znxtapp core module theme beta jmeter step up your open source security game with whitesource ,0
487104,14019329923.0,IssuesEvent,2020-10-29 18:01:34,Elle624/FitLit-Refactor,https://api.github.com/repos/Elle624/FitLit-Refactor,closed,Refactor hydration card in scripts,Priority 1 html js refactor,"We want to refactor the hydration related code in the scripts file into functions that determine the change of information that is displayed upon the specific events
- [x] Group innerText
- [x] Group innerHTML
- [x] Wrap into functions if necessary",1.0,"Refactor hydration card in scripts - We want to refactor the hydration related code in the scripts file into functions that determine the change of information that is displayed upon the specific events
- [x] Group innerText
- [x] Group innerHTML
- [x] Wrap into functions if necessary",0,refactor hydration card in scripts we want to refactor the hydration related code in the scripts file into functions that determine the change of information that is displayed upon the specific events group innertext group innerhtml wrap into functions if necessary,0
9308,27957236710.0,IssuesEvent,2023-03-24 13:17:27,gchq/gaffer-docker,https://api.github.com/repos/gchq/gaffer-docker,closed,Improve log output of publish_images.sh,automation,"It should be clear what image and tag is being pushed and whether it was successful or not.
As well as this, it would be much clearer if the docker build output wasn't there.
",1.0,"Improve log output of publish_images.sh - It should be clear what image and tag is being pushed and whether it was successful or not.
As well as this, it would be much clearer if the docker build output wasn't there.
",1,improve log output of publish images sh it should be clear what image and tag is being pushed and whether it was successful or not as well as this it would be much clearer if the docker build output wasn t there ,1
271958,29794962629.0,IssuesEvent,2023-06-16 01:00:16,billmcchesney1/hadoop,https://api.github.com/repos/billmcchesney1/hadoop,closed,CVE-2016-10744 (Medium) detected in select2-4.0.0.tgz - autoclosed,Mend: dependency security vulnerability,"## CVE-2016-10744 - Medium Severity Vulnerability
Vulnerable Library - select2-4.0.0.tgz
Select2 is a jQuery based replacement for select boxes. It supports searching, remote data sets, and infinite scrolling of results.
In Select2 through 4.0.5, as used in Snipe-IT and other products, rich selectlists allow XSS. This affects use cases with Ajax remote data loading when HTML templates are used to display listbox data.
In Select2 through 4.0.5, as used in Snipe-IT and other products, rich selectlists allow XSS. This affects use cases with Ajax remote data loading when HTML templates are used to display listbox data.
***
- [ ] Check this box to open an automated fix PR
",0,cve medium detected in tgz autoclosed cve medium severity vulnerability vulnerable library tgz is a jquery based replacement for select boxes it supports searching remote data sets and infinite scrolling of results library home page a href path to dependency file hadoop yarn project hadoop yarn hadoop yarn ui src main webapp package json path to vulnerable library hadoop yarn project hadoop yarn hadoop yarn ui src main webapp node modules package json dependency hierarchy x tgz vulnerable library found in base branch trunk vulnerability details in through as used in snipe it and other products rich selectlists allow xss this affects use cases with ajax remote data loading when html templates are used to display listbox data publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution check this box to open an automated fix pr ,0
468,6554443771.0,IssuesEvent,2017-09-06 05:55:56,inboundnow/inbound-pro,https://api.github.com/repos/inboundnow/inbound-pro,opened,[automation/email] Make sure DISABLE_WP_CRON is not set to true and notify user when it is,Automation Mailer UX Enhancement,This will help to prevent customer support requests and UX issues with automation/mailer when users have this flagged to true and are not aware of it. ,1.0,[automation/email] Make sure DISABLE_WP_CRON is not set to true and notify user when it is - This will help to prevent customer support requests and UX issues with automation/mailer when users have this flagged to true and are not aware of it. ,1, make sure disable wp cron is not set to true and notify user when it is this will help to prevent customer support requests and ux issues with automation mailer when users have this flagged to true and are not aware of it ,1
7076,24190043675.0,IssuesEvent,2022-09-23 16:35:37,bcgov/api-services-portal,https://api.github.com/repos/bcgov/api-services-portal,closed,FAILED: Automated Tests(114),automation,"Stats: {
""suites"": 44,
""tests"": 322,
""passes"": 208,
""pending"": 0,
""failures"": 114,
""start"": ""2022-09-21T06:13:43.771Z"",
""end"": ""2022-09-21T06:57:39.746Z"",
""duration"": 945206,
""testsRegistered"": 322,
""passPercent"": 64.59627329192547,
""pendingPercent"": 0,
""other"": 0,
""hasOther"": false,
""skipped"": 0,
""hasSkipped"": false
}
Failed Tests:
""activate the service for Test environment""
""activate the service for Dev environment""
""grant namespace access to Mark (access manager)""
""Grant CredentialIssuer.Admin permission to Janis (API Owner)""
""authenticates Mark (Access-Manager)""
""authenticates Mark (Access-Manager)""
""verify the request details""
""Add group labels in request details window""
""approves an access request""
""Verify that API is accessible with the generated API Key""
""authenticates Mark (Access-Manager)""
""Navigate to Consumer page and filter the product""
""Click on the first consumer""
""Click on Grant Access button""
""Grant Access to Test environment""
""Verify that API is accessible with the generated API Key for Test environment""
""authenticates Mark (Access Manager)""
""Navigate to Consumer page and filter the product""
""Select the consumer from the list""
""set IP address that is not accessible in the network as allowed IP and set Route as scope""
""verify IP Restriction error when the API calls other than the allowed IP""
""set IP address that is not accessible in the network as allowed IP and set service as scope""
""verify IP Restriction error when the API calls other than the allowed IP""
""set IP address that is accessible in the network as allowed IP and set route as scope""
""set IP address that is accessible in the network as allowed IP and set service as scope""
""Navigate to Consumer page and filter the product""
""set api ip-restriction to global service level""
""set IP address that is not accessible in the network as allowed IP and set service as scope""
""verify IP Restriction error when the API calls other than the allowed IP""
""Navigate to Consumer page and filter the product""
""set api ip-restriction to global service level""
""set IP address that is not accessible in the network as allowed IP and set service as scope""
""verify IP Restriction error when the API calls other than the allowed IP""
""authenticates Mark (Access Manager)""
""Navigate to Consumer page and filter the product""
""Select the consumer from the list ""
""set api rate limit as per the test config, Local Policy and Scope as Service""
""verify rate limit error when the API calls beyond the limit""
""set api rate limit as per the test config, Local Policy and Scope as Route""
""verify rate limit error when the API calls beyond the limit""
""set api rate limit as per the test config, Redis Policy and Scope as Service""
""verify rate limit error when the API calls beyond the limit""
""set api rate limit as per the test config, Redis Policy and Scope as Route""
""verify rate limit error when the API calls beyond the limit""
""set api rate limit to global service level""
""Verify that Rate limiting is set at global service level""
""set api rate limit as per the test config, Redis Policy and Scope as Service""
""verify rate limit error when the API calls beyond the limit""
""set api rate limit to global service level""
""Verify that Rate limiting is set at global service level""
""set api rate limit as per the test config, Redis Policy and Scope as Service""
""verify rate limit error when the API calls beyond the limit""
""authenticates Mark (Access-Manager)""
""verify the request details""
""Add group labels in request details window""
""approves an access request""
""authenticates Mark (Access-Manager)""
""verify that consumers are filters as per given parameter""
""authenticates Mark (Access-Manager)""
""Navigate to Consumer page and filter the product""
""Click on the first consumer""
""Verify that labels can be deleted""
""Verify that labels can be updated""
""Verify that labels can be added""
""Grant namespace access to access manager(Mark)""
""Grant CredentialIssuer.Admin permission to credential issuer(Wendy)""
""Select the namespace created for client credential ""
""Creates authorization profile for Client ID/Secret""
""Creates authorization profile for JWT - Generated Key Pair""
""Creates authorization profile for JWKS URL""
""Adds environment with Client ID/Secret authenticator to product""
""Adds environment with JWT - Generated Key Pair authenticator to product""
""Adds environment with JWT - JWKS URL authenticator to product""
""Applies authorization plugin to service published to Kong Gateway""
""activate the service for Test environment""
""Creates an access request""
""Access Manager logs in""
""approves an access request""
""Get access token using client ID and secret; make API request""
""Creates an access request""
""Access Manager logs in""
""approves an access request""
""Get access token using JWT key pair; make API request""
""Creates an access request""
""Access Manager logs in""
""approves an access request""
""Get access token using JWT key pair; make API request""
""Get current API Key""
""Verify that only one API key(new key) is set to the consumer in Kong gateway""
""Verify that API is not accessible with the old API Key""
""Regenrate credential client ID and Secret""
""Make sure that the old client ID and Secret is disabled""
""grant namespace access to Mark (access manager)""
""Grant permission to Janis (API Owner)""
""Grant permission to Wendy""
""Grant \""Access.Manager\"" access to Mark (access manager)""
""Authenticates Mark (Access-Manager)""
""Verify that the option to approve request is displayed""
""Grant only \""Namespace.Manage\"" permission to Wendy""
""Authenticates Wendy (Credential-Issuer)""
""Verify that all the namespace options and activities are displayed""
""Grant only \""CredentialIssuer.Admin\"" access to Wendy (access manager)""
""Authenticates Wendy (Credential-Issuer)""
""Verify that only Authorization Profile option is displayed in Namespace page""
""Verify that authorization profile for Client ID/Secret is generated""
""Grant only \""Namespace.View\"" permission to Mark""
""authenticates Mark""
""Verify that service accounts are not created""
""Grant \""GatewayConfig.Publish\"" and \""Namespace.View\"" access to Wendy (access manager)""
""Authenticates Wendy (Credential-Issuer)""
""Verify that GWA API allows user to publish the API to Kong gateway""
""Delete the product environment and verify the success code in the response""
""Get the resource and verify that product environment is deleted""
""Force delete the namespace and verify the success code in the response""
Run Link: https://github.com/bcgov/api-services-portal/actions/runs/3095486922",1.0,"FAILED: Automated Tests(114) - Stats: {
""suites"": 44,
""tests"": 322,
""passes"": 208,
""pending"": 0,
""failures"": 114,
""start"": ""2022-09-21T06:13:43.771Z"",
""end"": ""2022-09-21T06:57:39.746Z"",
""duration"": 945206,
""testsRegistered"": 322,
""passPercent"": 64.59627329192547,
""pendingPercent"": 0,
""other"": 0,
""hasOther"": false,
""skipped"": 0,
""hasSkipped"": false
}
Failed Tests:
""activate the service for Test environment""
""activate the service for Dev environment""
""grant namespace access to Mark (access manager)""
""Grant CredentialIssuer.Admin permission to Janis (API Owner)""
""authenticates Mark (Access-Manager)""
""authenticates Mark (Access-Manager)""
""verify the request details""
""Add group labels in request details window""
""approves an access request""
""Verify that API is accessible with the generated API Key""
""authenticates Mark (Access-Manager)""
""Navigate to Consumer page and filter the product""
""Click on the first consumer""
""Click on Grant Access button""
""Grant Access to Test environment""
""Verify that API is accessible with the generated API Key for Test environment""
""authenticates Mark (Access Manager)""
""Navigate to Consumer page and filter the product""
""Select the consumer from the list""
""set IP address that is not accessible in the network as allowed IP and set Route as scope""
""verify IP Restriction error when the API calls other than the allowed IP""
""set IP address that is not accessible in the network as allowed IP and set service as scope""
""verify IP Restriction error when the API calls other than the allowed IP""
""set IP address that is accessible in the network as allowed IP and set route as scope""
""set IP address that is accessible in the network as allowed IP and set service as scope""
""Navigate to Consumer page and filter the product""
""set api ip-restriction to global service level""
""set IP address that is not accessible in the network as allowed IP and set service as scope""
""verify IP Restriction error when the API calls other than the allowed IP""
""Navigate to Consumer page and filter the product""
""set api ip-restriction to global service level""
""set IP address that is not accessible in the network as allowed IP and set service as scope""
""verify IP Restriction error when the API calls other than the allowed IP""
""authenticates Mark (Access Manager)""
""Navigate to Consumer page and filter the product""
""Select the consumer from the list ""
""set api rate limit as per the test config, Local Policy and Scope as Service""
""verify rate limit error when the API calls beyond the limit""
""set api rate limit as per the test config, Local Policy and Scope as Route""
""verify rate limit error when the API calls beyond the limit""
""set api rate limit as per the test config, Redis Policy and Scope as Service""
""verify rate limit error when the API calls beyond the limit""
""set api rate limit as per the test config, Redis Policy and Scope as Route""
""verify rate limit error when the API calls beyond the limit""
""set api rate limit to global service level""
""Verify that Rate limiting is set at global service level""
""set api rate limit as per the test config, Redis Policy and Scope as Service""
""verify rate limit error when the API calls beyond the limit""
""set api rate limit to global service level""
""Verify that Rate limiting is set at global service level""
""set api rate limit as per the test config, Redis Policy and Scope as Service""
""verify rate limit error when the API calls beyond the limit""
""authenticates Mark (Access-Manager)""
""verify the request details""
""Add group labels in request details window""
""approves an access request""
""authenticates Mark (Access-Manager)""
""verify that consumers are filters as per given parameter""
""authenticates Mark (Access-Manager)""
""Navigate to Consumer page and filter the product""
""Click on the first consumer""
""Verify that labels can be deleted""
""Verify that labels can be updated""
""Verify that labels can be added""
""Grant namespace access to access manager(Mark)""
""Grant CredentialIssuer.Admin permission to credential issuer(Wendy)""
""Select the namespace created for client credential ""
""Creates authorization profile for Client ID/Secret""
""Creates authorization profile for JWT - Generated Key Pair""
""Creates authorization profile for JWKS URL""
""Adds environment with Client ID/Secret authenticator to product""
""Adds environment with JWT - Generated Key Pair authenticator to product""
""Adds environment with JWT - JWKS URL authenticator to product""
""Applies authorization plugin to service published to Kong Gateway""
""activate the service for Test environment""
""Creates an access request""
""Access Manager logs in""
""approves an access request""
""Get access token using client ID and secret; make API request""
""Creates an access request""
""Access Manager logs in""
""approves an access request""
""Get access token using JWT key pair; make API request""
""Creates an access request""
""Access Manager logs in""
""approves an access request""
""Get access token using JWT key pair; make API request""
""Get current API Key""
""Verify that only one API key(new key) is set to the consumer in Kong gateway""
""Verify that API is not accessible with the old API Key""
""Regenrate credential client ID and Secret""
""Make sure that the old client ID and Secret is disabled""
""grant namespace access to Mark (access manager)""
""Grant permission to Janis (API Owner)""
""Grant permission to Wendy""
""Grant \""Access.Manager\"" access to Mark (access manager)""
""Authenticates Mark (Access-Manager)""
""Verify that the option to approve request is displayed""
""Grant only \""Namespace.Manage\"" permission to Wendy""
""Authenticates Wendy (Credential-Issuer)""
""Verify that all the namespace options and activities are displayed""
""Grant only \""CredentialIssuer.Admin\"" access to Wendy (access manager)""
""Authenticates Wendy (Credential-Issuer)""
""Verify that only Authorization Profile option is displayed in Namespace page""
""Verify that authorization profile for Client ID/Secret is generated""
""Grant only \""Namespace.View\"" permission to Mark""
""authenticates Mark""
""Verify that service accounts are not created""
""Grant \""GatewayConfig.Publish\"" and \""Namespace.View\"" access to Wendy (access manager)""
""Authenticates Wendy (Credential-Issuer)""
""Verify that GWA API allows user to publish the API to Kong gateway""
""Delete the product environment and verify the success code in the response""
""Get the resource and verify that product environment is deleted""
""Force delete the namespace and verify the success code in the response""
Run Link: https://github.com/bcgov/api-services-portal/actions/runs/3095486922",1,failed automated tests stats suites tests passes pending failures start end duration testsregistered passpercent pendingpercent other hasother false skipped hasskipped false failed tests activate the service for test environment activate the service for dev environment grant namespace access to mark access manager grant credentialissuer admin permission to janis api owner authenticates mark access manager authenticates mark access manager verify the request details add group labels in request details window approves an access request verify that api is accessible with the generated api key authenticates mark access manager navigate to consumer page and filter the product click on the first consumer click on grant access button grant access to test environment verify that api is accessible with the generated api key for test environment authenticates mark access manager navigate to consumer page and filter the product select the consumer from the list set ip address that is not accessible in the network as allowed ip and set route as scope verify ip restriction error when the api calls other than the allowed ip set ip address that is not accessible in the network as allowed ip and set service as scope verify ip restriction error when the api calls other than the allowed ip set ip address that is accessible in the network as allowed ip and set route as scope set ip address that is accessible in the network as allowed ip and set service as scope navigate to consumer page and filter the product set api ip restriction to global service level set ip address that is not accessible in the network as allowed ip and set service as scope verify ip restriction error when the api calls other than the allowed ip navigate to consumer page and filter the product set api ip restriction to global service level set ip address that is not accessible in the network as allowed ip and set service as scope verify ip restriction error when the api calls other than the allowed ip authenticates mark access manager navigate to consumer page and filter the product select the consumer from the list set api rate limit as per the test config local policy and scope as service verify rate limit error when the api calls beyond the limit set api rate limit as per the test config local policy and scope as route verify rate limit error when the api calls beyond the limit set api rate limit as per the test config redis policy and scope as service verify rate limit error when the api calls beyond the limit set api rate limit as per the test config redis policy and scope as route verify rate limit error when the api calls beyond the limit set api rate limit to global service level verify that rate limiting is set at global service level set api rate limit as per the test config redis policy and scope as service verify rate limit error when the api calls beyond the limit set api rate limit to global service level verify that rate limiting is set at global service level set api rate limit as per the test config redis policy and scope as service verify rate limit error when the api calls beyond the limit authenticates mark access manager verify the request details add group labels in request details window approves an access request authenticates mark access manager verify that consumers are filters as per given parameter authenticates mark access manager navigate to consumer page and filter the product click on the first consumer verify that labels 
can be deleted verify that labels can be updated verify that labels can be added grant namespace access to access manager mark grant credentialissuer admin permission to credential issuer wendy select the namespace created for client credential creates authorization profile for client id secret creates authorization profile for jwt generated key pair creates authorization profile for jwks url adds environment with client id secret authenticator to product adds environment with jwt generated key pair authenticator to product adds environment with jwt jwks url authenticator to product applies authorization plugin to service published to kong gateway activate the service for test environment creates an access request access manager logs in approves an access request get access token using client id and secret make api request creates an access request access manager logs in approves an access request get access token using jwt key pair make api request creates an access request access manager logs in approves an access request get access token using jwt key pair make api request get current api key verify that only one api key new key is set to the consumer in kong gateway verify that api is not accessible with the old api key regenrate credential client id and secret make sure that the old client id and secret is disabled grant namespace access to mark access manager grant permission to janis api owner grant permission to wendy grant access manager access to mark access manager authenticates mark access manager verify that the option to approve request is displayed grant only namespace manage permission to wendy authenticates wendy credential issuer verify that all the namespace options and activities are displayed grant only credentialissuer admin access to wendy access manager authenticates wendy credential issuer verify that only authorization profile option is displayed in namespace page verify that authorization profile for client id secret is generated grant only namespace view permission to mark authenticates mark verify that service accounts are not created grant gatewayconfig publish and namespace view access to wendy access manager authenticates wendy credential issuer verify that gwa api allows user to publish the api to kong gateway delete the product environment and verify the success code in the response get the resource and verify that product environment is deleted force delete the namespace and verify the success code in the response run link ,1
4823,17651662898.0,IssuesEvent,2021-08-20 13:59:22,CDCgov/prime-field-teams,https://api.github.com/repos/CDCgov/prime-field-teams,closed,Solution Imp - Plan & Schedule Go Live,State-Louisiana sender-automation,"**Temporary New Daily Process:**
- iPatientcare will produce a daily CSV file by running their stored procedure and copying the results with Headers to a CSV file stored in the /out/directory.
- A6 will:
- From the /Public Health/VBScript directory, A6 will run ""Reddy-Send.bat""
- A6 will review the .REP and .TXT files for errors.
Note, a new Issue will be created when we begin development of the 100% automation of the Reddy process.
**Potential Final Daily Process:**
- iPatientcare will automate the execution and creation of the CSV file.
- A6 will create a scheduled task to execute ""Reddy-Send.bat"" daily.
- Dr. Reddy's office will be responsible for reviewing the error files.",1.0,"Solution Imp - Plan & Schedule Go Live - **Temporary New Daily Process:**
- iPatientcare will produce a daily CSV file by running their stored procedure and copying the results with Headers to a CSV file stored in the /out/directory.
- A6 will:
- From the /Public Health/VBScript directory, A6 will run ""Reddy-Send.bat""
- A6 will review the .REP and .TXT files for errors.
Note, a new Issue will be created when we begin development of the 100% automation of the Reddy process.
**Potential Final Daily Process:**
- iPatientcare will automate the execution and creation of the CSV file.
- A6 will create a scheduled task to execute ""Reddy-Send.bat"" daily.
- Dr. Reddy's office will be responsible for reviewing the error files.",1,solution imp plan schedule go live temporary new daily process ipatientcare will produce a daily csv file by running their stored procedure and copying the results with headers to a csv file stored in the out directory will from the public health vbscript directory will run reddy send bat will review the rep and txt files for errors note a new issue will be created when we begin development of the automation of the reddy process potential final daily process ipatientcare will automate the execution and creation of the csv file will create a scheduled task to execute reddy send bat daily dr reddy s office will be responsible for reviewing the error files ,1
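When the temporary manual step is eventually automated, the scheduled task could amount to something like the sketch below: run the existing batch file, then flag any .REP/.TXT result files that need review. The exact path, the error keyword, and the overall approach are assumptions for illustration, not the agreed process.

```python
# Assumed sketch of the eventual scheduled step (paths and the error keyword
# are guesses): run the batch file A6 runs manually today, then flag any
# result files that mention an error for human review.
import subprocess
from pathlib import Path

SCRIPT_DIR = Path("/Public Health/VBScript")  # directory named in the issue; exact path assumed

def run_daily_send() -> None:
    subprocess.run([str(SCRIPT_DIR / "Reddy-Send.bat")], check=True, cwd=SCRIPT_DIR)
    for report in list(SCRIPT_DIR.glob("*.REP")) + list(SCRIPT_DIR.glob("*.TXT")):
        if "ERROR" in report.read_text(errors="ignore").upper():
            print(f"Review needed: {report.name}")

if __name__ == "__main__":
    run_daily_send()
```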
25763,19102186016.0,IssuesEvent,2021-11-30 00:28:17,stylelint/stylelint,https://api.github.com/repos/stylelint/stylelint,closed,Use codecov for coverage,status: ready to implement type: infrastructure,"## What is a problem
Builds on Coveralls have not worked for a while:
https://coveralls.io/github/stylelint/stylelint
Even when seeing the log of GitHub Actions, we cannot know anything... 🤷🏼
https://github.com/stylelint/stylelint/runs/4336137130?check_suite_focus=true#step:8:8
I've resynced with the GitHub repository on the Coveralls repo settings, but errors still occur:
https://github.com/stylelint/stylelint/pull/5743#issuecomment-980070312
## How to solve
So, I think there are the following options:
1. Stop using Coveralls
2. Migrate to another service
For example, [Codecov](https://codecov.io/) is an option. When I've tried my forked repository, it works:
- https://codecov.io/gh/ybiquitous/stylelint/tree/401071d44695a610413a926fef873916ab66c102
- https://github.com/ybiquitous/stylelint/runs/4346303358?check_suite_focus=true#step:8:32
- https://github.com/ybiquitous/stylelint/pull/2
Does anyone have thoughts?
",1.0,"Use codecov for coverage - ## What is a problem
Builds on Coveralls have not worked for a while:
https://coveralls.io/github/stylelint/stylelint
Even when seeing the log of GitHub Actions, we cannot know anything... 🤷🏼
https://github.com/stylelint/stylelint/runs/4336137130?check_suite_focus=true#step:8:8
I've resynced with the GitHub repository on the Coveralls repo settings, but errors still occur:
https://github.com/stylelint/stylelint/pull/5743#issuecomment-980070312
## How to solve
So, I think there are the following options:
1. Stop using Coveralls
2. Migrate to another service
For example, [Codecov](https://codecov.io/) is an option. When I've tried my forked repository, it works:
- https://codecov.io/gh/ybiquitous/stylelint/tree/401071d44695a610413a926fef873916ab66c102
- https://github.com/ybiquitous/stylelint/runs/4346303358?check_suite_focus=true#step:8:32
- https://github.com/ybiquitous/stylelint/pull/2
Does anyone have thoughts?
",0,use codecov for coverage what is a problem builds on coveralls have not worked for a while even when seeing the log of github actions we cannot know anything 🤷🏼 i ve resynced with the github repository on the coveralls repo settings but errors still occur how to solve so i think there are the following options stop using coveralls migrate to another service for example is an option when i ve tried my forked repository it works does anyone have thoughts ,0
134252,29935643034.0,IssuesEvent,2023-06-22 12:35:31,max-kamps/jpd-breader,https://api.github.com/repos/max-kamps/jpd-breader,opened,Switch to Manifest V3,code enhancement blocked,"Current blockers:
- No support for background service workers in Firefox
- Possible workaround: Continue using background pages on Firefox (would require two separate manifests)
- No support for `DOMParser` in background service workers
- Required because no JPDB api support for FORQing and reviewing
",1.0,"Switch to Manifest V3 - Current blockers:
- No support for background service workers in Firefox
- Possible workaround: Continue using background pages on Firefox (would require two separate manifests)
- No support for `DOMParser` in background service workers
- Required because no JPDB api support for FORQing and reviewing
",0,switch to manifest current blockers no support for background service workers in firefox possible workaround continue using background pages on firefox would require two separate manifests no support for domparser in background service workers required because no jpdb api support for forqing and reviewing ,0
2757,12541183540.0,IssuesEvent,2020-06-05 11:50:58,input-output-hk/cardano-node,https://api.github.com/repos/input-output-hk/cardano-node,opened,[QA] - Create complex transactions,e2e automation,"- 1 input, 5 outputs
- 15 inputs, 35 outputs
- 100 inputs, 200 outputs !? ",1.0,"[QA] - Create complex transactions - - 1 input, 5 outputs
- 15 inputs, 35 outputs
- 100 inputs, 200 outputs !? ",1, create complex transactions input outputs inputs outputs inputs outputs ,1
372281,25992702231.0,IssuesEvent,2022-12-20 09:03:27,zkBob/zeropool-relayer,https://api.github.com/repos/zkBob/zeropool-relayer,closed,List of relayer endpoints in README outdated,documentation,Consider either to update the list of endpoint supported by the relayer or remove it from README.md file at all.,1.0,List of relayer endpoints in README outdated - Consider either to update the list of endpoint supported by the relayer or remove it from README.md file at all.,0,list of relayer endpoints in readme outdated consider either to update the list of endpoint supported by the relayer or remove it from readme md file at all ,0
162956,13906512739.0,IssuesEvent,2020-10-20 11:22:45,Dirnei/node-red-contrib-zigbee2mqtt-devices,https://api.github.com/repos/Dirnei/node-red-contrib-zigbee2mqtt-devices,opened,Node help text missing or unclear for some nodes,documentation,"# Override nodes
Override-state, override-brightness, override-temperature,
override color could use a better help text. Currently, it's only a one-liner.
I don't fully understand what they do - so it does not make sense for me to write it.
## Ideas
What it does:
Overrides the state of a payload...?
How to use it:
Button:ON ----> override:OFF ---> Lamp:GoesToOFF
# Other nodes
**scene-selector:** Description is not enough.
**climate-sensor:** Description is not enough.
**generic-lamp:** Description is not enough. I don't understand what the node is for.
**button-switch:** Description is not enough.
**device-status:** What's the message. Is it really every message? So it just listens to all the messages for lets say Lamp X? What's the generic MQTT Device sitch for?",1.0,"Node help text missing or unclear for some nodes - # Override nodes
Override-state, override-brightness, override-temperature,
override color could use a better help text. Currently, it's only a one-liner.
I don't fully understand what they do - so it does not make sense for me to write it.
## Ideas
What it does:
Overrides the state of a payload...?
How to use it:
Button:ON ----> override:OFF ---> Lamp:GoesToOFF
# Other nodes
**scene-selector:** Description is not enough.
**climate-sensor:** Description is not enough.
**generic-lamp:** Description is not enough. I don't understand what the node is for.
**button-switch:** Description is not enough.
**device-status:** What's the message. Is it really every message? So it just listens to all the messages for lets say Lamp X? What's the generic MQTT Device sitch for?",0,node help text missing or unclear for some nodes override nodes override state override brightness override temperature override color could use a better help text currently it s only a one liner i don t fully understand what they do so it does not make sense for me to write it ideas what it does overrides the state of a payload how to use it button on override off lamp goestooff other nodes scene selector description is not enough climate sensor description is not enough generic lamp description is not enough i don t understand what the node is for button switch description is not enough device status what s the message is it really every message so it just listens to all the messages for lets say lamp x what s the generic mqtt device sitch for ,0
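Reading the Button:ON -> override:OFF -> Lamp:OFF example above, the override nodes appear to replace one field of the incoming message before passing it on. A small Python sketch of that guessed behaviour, which the requested help text should confirm or correct:

```python
# Guessed behaviour of an override node (to be confirmed by the help text):
# pass the message through, but force one field to a fixed value.
def override_state(msg: dict, forced_state: str) -> dict:
    out = dict(msg)
    out["state"] = forced_state
    return out

# Button sends ON, the override node forces OFF, the lamp receives OFF.
print(override_state({"device": "lamp", "state": "ON"}, "OFF"))
```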
1487,10194933281.0,IssuesEvent,2019-08-12 16:51:01,DoESLiverpool/somebody-should,https://api.github.com/repos/DoESLiverpool/somebody-should,opened,Give mqtt.local a static IP,2 - Should DoES System: Automation System: Network,With devices like the the [prototype room occupancy sensor/display](https://github.com/DoESLiverpool/somebody-should/issues/1185) talking to the IP address (they don't have mDNS support to talk to `mqtt.local`) of the MQTT broker it would be good if it was on a static IP so that a reboot doesn't take some of the services accidentally offliine.,1.0,Give mqtt.local a static IP - With devices like the the [prototype room occupancy sensor/display](https://github.com/DoESLiverpool/somebody-should/issues/1185) talking to the IP address (they don't have mDNS support to talk to `mqtt.local`) of the MQTT broker it would be good if it was on a static IP so that a reboot doesn't take some of the services accidentally offliine.,1,give mqtt local a static ip with devices like the the talking to the ip address they don t have mdns support to talk to mqtt local of the mqtt broker it would be good if it was on a static ip so that a reboot doesn t take some of the services accidentally offliine ,1
4772,17400932549.0,IssuesEvent,2021-08-02 19:33:07,improvecountry/ideas,https://api.github.com/repos/improvecountry/ideas,opened,Mobile notifications about food and medicines withdrawals,English Poland automation foreigners,"### Problem description
There are no mobile notifications about food or medicine withdrawals. This news is currently posted on the [GIS website](https://www.gov.pl/web/gis/ostrzezenia) and the [RDG website](https://rdg.ezdrowie.gov.pl/), or on social media such as Facebook and Twitter.
### Solution proposal
It's worth considering using the [RSO mobile app](https://www.gov.pl/web/mswia/regionalny-system-ostrzegania) as an additional channel to inform citizens promptly about possible risks to their health and lives. These notifications should be sent in Polish and in English.
### Alternative solutions
Workaround:
GOV PL Info ([project description in English](https://krawczyk.in/gpi), [project description in Polish](https://krawczyk.in/gpi-pl)):
- [GPI GIS (unofficial Telegram Channel)](https://t.me/s/gpi_gis)
- [GPI GIF (unofficial Telegram Channel)](https://t.me/s/gpi_gif)
### Example solutions
_No response_
### Consent
I consent to list my GitHub username and link to my GitHub profile on the List of Ideators., I consent to list my name and surname on the List of Ideators.
### Regulations
- [X] I agree to follow Regulations of Improve Country",1.0,"Mobile notifications about food and medicines withdrawals - ### Problem description
Lack of mobile notifications about food or medicines withdrawals. These news are being posted accordingly on [GIS website](https://www.gov.pl/web/gis/ostrzezenia) and [RDG website](https://rdg.ezdrowie.gov.pl/) or in social media like Facebook or Twitter.
### Solution proposal
It’s worth to consider to use [RSO mobile app](https://www.gov.pl/web/mswia/regionalny-system-ostrzegania) as an additional medium to inform citizens opportunely about possible risks for their health/lives. These notifications should be send in Polish and in English.
### Alternative solutions
Workaround:
GOV PL Info ([project description in English](https://krawczyk.in/gpi), [project description in Polish](https://krawczyk.in/gpi-pl)):
- [GPI GIS (unofficial Telegram Channel)](https://t.me/s/gpi_gis)
- [GPI GIF (unofficial Telegram Channel)](https://t.me/s/gpi_gif)
### Example solutions
_No response_
### Consent
I'm consent to list my GitHub username and link to my GitHub profile on the List of Ideators., I'm consent to list my name and surname on the List of Ideators.
### Regulations
- [X] I agree to follow Regulations of Improve Country",1,mobile notifications about food and medicines withdrawals problem description lack of mobile notifications about food or medicines withdrawals these news are being posted accordingly on and or in social media like facebook or twitter solution proposal it’s worth to consider to use as an additional medium to inform citizens opportunely about possible risks for their health lives these notifications should be send in polish and in english alternative solutions workaround gov pl info example solutions no response consent i m consent to list my github username and link to my github profile on the list of ideators i m consent to list my name and surname on the list of ideators regulations i agree to follow regulations of improve country,1
6949,24056799067.0,IssuesEvent,2022-09-16 17:45:47,mlcommons/ck,https://api.github.com/repos/mlcommons/ck,closed,[CM scripts] download Git repos without history,enhancement cm-script-automation cm-mlperf,"As we discussed today, the total size of MLPerf inference repo is 440MB. However, without history, it's only 5MB.
The idea is to add an ENV (maybe CM_SKIP_GIT_HISTORY) to skip history in scripts when pulling such repos?
It can be used in ""cm pull repo"" and in ""cm run script"" ... Can discuss it further ...
",1.0,"[CM scripts] download Git repos without history - As we discussed today, the total size of MLPerf inference repo is 440MB. However, without history, it's only 5MB.
The idea is to add an ENV (maybe CM_SKIP_GIT_HISTORY) to skip history in scripts when pulling such repos?
It can be used in ""cm pull repo"" and in ""cm run script"" ... Can discuss it further ...
",1, download git repos without history as we discussed today the total size of mlperf inference repo is however without history it s only the idea is to add an env maybe cm skip git history to skip history in scripts when pulling such repos it can be used in cm pull repo and in cm run script can discuss it further ,1
3568,13995460085.0,IssuesEvent,2020-10-28 03:20:24,domoticafacilconjota/capitulos,https://api.github.com/repos/domoticafacilconjota/capitulos,opened,[AtoNodeRED]Control de Luz de la salita,Automation a Node RED,"**Código de la automatización**
```
- id: '1602283635834'
alias: Encender / Apagar Manual
description: Desactiva las automatizaciones por movimientos y pasa el sistema a
manual.
trigger:
- platform: device
domain: mqtt
device_id: d0c248fc0a7311eb8ac4bd3e4b327c07
type: action
subtype: single
discovery_id: 0x00158d000450b798 action_single
condition: []
action:
- type: toggle
device_id: d21be7620a7311eba8b1978a04c92df0
entity_id: light.lampara_light
domain: light
- service: automation.turn_off
data: {}
entity_id: automation.centinela
- service: automation.turn_off
data: {}
entity_id: automation.turn_the_light_on_when_motion_is_detected
mode: single
- id: '1602285296839'
alias: Enciende al atardecer sin hay movimiento
description: Enciende la luz por la tarde cuando hay movimiento
trigger:
- platform: device
domain: binary_sensor
entity_id: binary_sensor.movimiento_occupancy
device_id: a72089e20a7311eb9994e3b6f09595af
type: motion
for:
hours: 0
minutes: 0
seconds: 0
condition:
- condition: time
after: '16:30:00'
before: '23:30:00'
- condition: and
conditions:
- condition: sun
after: sunset
action:
- type: turn_on
device_id: d21be7620a7311eba8b1978a04c92df0
entity_id: light.lampara_light
domain: light
flash: short
brightness_pct: 100
mode: single
- id: '1602302233727'
alias: Centinela
description: Observa si hay movimiento.
trigger:
- platform: time_pattern
minutes: /3
condition:
- type: is_no_motion
condition: device
device_id: a72089e20a7311eb9994e3b6f09595af
entity_id: binary_sensor.movimiento_occupancy
domain: binary_sensor
- condition: and
conditions:
- condition: device
type: is_on
device_id: d21be7620a7311eba8b1978a04c92df0
entity_id: light.lampara_light
domain: light
action:
- service: light.turn_off
data: {}
entity_id: light.lampara_light
mode: single
- id: '1602305727370'
alias: Desactiva y Activa
description: apaga la luz y desactiva las actualizaciones y las activa para iniciar
el ciclo.
trigger:
- platform: time
at: '23:45:00'
- platform: time
at: '16:00:00'
condition: []
action:
- type: turn_off
device_id: d21be7620a7311eba8b1978a04c92df0
entity_id: light.lampara_light
domain: light
- service: automation.toggle
data: {}
entity_id: automation.centinela
- service: automation.toggle
data: {}
entity_id: automation.turn_the_light_on_when_motion_is_detected
mode: single
```
**Explanation of what the automation currently does**
The automation turns on the living-room light if the sun has set and there is movement in the room between 16:00 and 23:45.
The Centinela (sentinel) automation checks every 10 minutes whether there is movement in the room and, if there is none, turns the light off. The light comes on again when movement is detected. These cycles can be taken out of automatic mode by clicking a wireless switch, so the light is controlled manually. If the manual function is not used, at the end of the schedule (23:45) the Activate/Deactivate automation runs, turning the light off if it is on and disabling the motion-triggered and sentinel automations, then re-enabling them at 16:00 to start the cycle again.
**Author's notes**
The hardware used is an Aqara motion sensor (model RTCGQ11LM), an Aqara wireless switch (model WXKG11LM) and a Xiaomi lamp (model ZNLDP12LM), all of them using the Zigbee protocol.",1.0,"[AtoNodeRED]Control de Luz de la salita - **Código de la automatización**
```
- id: '1602283635834'
alias: Encender / Apagar Manual
description: Desactiva las automatizaciones por movimientos y pasa el sistema a
manual.
trigger:
- platform: device
domain: mqtt
device_id: d0c248fc0a7311eb8ac4bd3e4b327c07
type: action
subtype: single
discovery_id: 0x00158d000450b798 action_single
condition: []
action:
- type: toggle
device_id: d21be7620a7311eba8b1978a04c92df0
entity_id: light.lampara_light
domain: light
- service: automation.turn_off
data: {}
entity_id: automation.centinela
- service: automation.turn_off
data: {}
entity_id: automation.turn_the_light_on_when_motion_is_detected
mode: single
- id: '1602285296839'
alias: Enciende al atardecer sin hay movimiento
description: Enciende la luz por la tarde cuando hay movimiento
trigger:
- platform: device
domain: binary_sensor
entity_id: binary_sensor.movimiento_occupancy
device_id: a72089e20a7311eb9994e3b6f09595af
type: motion
for:
hours: 0
minutes: 0
seconds: 0
condition:
- condition: time
after: '16:30:00'
before: '23:30:00'
- condition: and
conditions:
- condition: sun
after: sunset
action:
- type: turn_on
device_id: d21be7620a7311eba8b1978a04c92df0
entity_id: light.lampara_light
domain: light
flash: short
brightness_pct: 100
mode: single
- id: '1602302233727'
alias: Centinela
description: Observa si hay movimiento.
trigger:
- platform: time_pattern
minutes: /3
condition:
- type: is_no_motion
condition: device
device_id: a72089e20a7311eb9994e3b6f09595af
entity_id: binary_sensor.movimiento_occupancy
domain: binary_sensor
- condition: and
conditions:
- condition: device
type: is_on
device_id: d21be7620a7311eba8b1978a04c92df0
entity_id: light.lampara_light
domain: light
action:
- service: light.turn_off
data: {}
entity_id: light.lampara_light
mode: single
- id: '1602305727370'
alias: Desactiva y Activa
description: apaga la luz y desactiva las actualizaciones y las activa para iniciar
el ciclo.
trigger:
- platform: time
at: '23:45:00'
- platform: time
at: '16:00:00'
condition: []
action:
- type: turn_off
device_id: d21be7620a7311eba8b1978a04c92df0
entity_id: light.lampara_light
domain: light
- service: automation.toggle
data: {}
entity_id: automation.centinela
- service: automation.toggle
data: {}
entity_id: automation.turn_the_light_on_when_motion_is_detected
mode: single
```
**Explicación de lo que hace actualmente la automatización**
La automiatizacion Enciende la luz de la sala , si el sol se entra y hay movimiento en la sala entre las 16 y las 23:45,
la automatizacion Centinela monitorea cada 10 minutos si hay moviemto en la sala, de no haberlo apaga la luz. La luz se enciende nuevamente si hay movimiento. Estos ciclos pueden dejar de ser automaticos y pasarlos a manual al dar clip a un interruptor inalambrico para ser controlado de manera manual. De no usar la funcion manual, al termino del horario 23:45 se ejecuta la automatizacion Activa/desactiva apgando la luz de estar encendida y desactivando las automatizaciones de encendido por movimiento y la centinela, para activarlas nuevamente a las 16 hrs, para iniciar nuevamente el ciclo.
**Notas del autor**
El hardware usado es u sensor de movimiento aqara modelo RTCGQ11LM, un interuptor inalambrico aqara modelo WXKG11LM y una lampara xiomi modelo ZNLDP12LM todas ellas con protocolo zigbee.",1, control de luz de la salita código de la automatización id alias encender apagar manual description desactiva las automatizaciones por movimientos y pasa el sistema a manual trigger platform device domain mqtt device id type action subtype single discovery id action single condition action type toggle device id entity id light lampara light domain light service automation turn off data entity id automation centinela service automation turn off data entity id automation turn the light on when motion is detected mode single id alias enciende al atardecer sin hay movimiento description enciende la luz por la tarde cuando hay movimiento trigger platform device domain binary sensor entity id binary sensor movimiento occupancy device id type motion for hours minutes seconds condition condition time after before condition and conditions condition sun after sunset action type turn on device id entity id light lampara light domain light flash short brightness pct mode single id alias centinela description observa si hay movimiento trigger platform time pattern minutes condition type is no motion condition device device id entity id binary sensor movimiento occupancy domain binary sensor condition and conditions condition device type is on device id entity id light lampara light domain light action service light turn off data entity id light lampara light mode single id alias desactiva y activa description apaga la luz y desactiva las actualizaciones y las activa para iniciar el ciclo trigger platform time at platform time at condition action type turn off device id entity id light lampara light domain light service automation toggle data entity id automation centinela service automation toggle data entity id automation turn the light on when motion is detected mode single explicación de lo que hace actualmente la automatización la automiatizacion enciende la luz de la sala si el sol se entra y hay movimiento en la sala entre las y las la automatizacion centinela monitorea cada minutos si hay moviemto en la sala de no haberlo apaga la luz la luz se enciende nuevamente si hay movimiento estos ciclos pueden dejar de ser automaticos y pasarlos a manual al dar clip a un interruptor inalambrico para ser controlado de manera manual de no usar la funcion manual al termino del horario se ejecuta la automatizacion activa desactiva apgando la luz de estar encendida y desactivando las automatizaciones de encendido por movimiento y la centinela para activarlas nuevamente a las hrs para iniciar nuevamente el ciclo notas del autor el hardware usado es u sensor de movimiento aqara modelo un interuptor inalambrico aqara modelo y una lampara xiomi modelo todas ellas con protocolo zigbee ,1
9105,27557787512.0,IssuesEvent,2023-03-07 19:21:11,PauloGasparSv/TestingHooks,https://api.github.com/repos/PauloGasparSv/TestingHooks,closed,New Test,Upwork Automation,Now testing with my handle outside of the teams but inside the `MARGELO_GH_USERNAMES` array,1.0,New Test - Now testing with my handle outside of the teams but inside the `MARGELO_GH_USERNAMES` array,1,new test now testing with my handle outside of the teams but inside the margelo gh usernames array,1
2570,12299505943.0,IssuesEvent,2020-05-11 12:30:32,wazuh/wazuh-qa,https://api.github.com/repos/wazuh/wazuh-qa,closed,Test update_interval parameter from vuln detector providers,automation core/vuln detector,"Hello team,
We want to test the attribute `update_interval` of Vulnerability detector feeds.
This test is related to #661
Regards.",1.0,"Test update_interval parameter from vuln detector providers - Hello team,
We want to test the attribute `update_interval` of Vulnerability detector feeds.
This test is related to #661
Regards.",1,test update interval parameter from vuln detector providers hello team we want to test the attribute update interval of vulnerability detector feeds this test is related to regards ,1
119400,15528674243.0,IssuesEvent,2021-03-13 12:01:32,racklet/racklet,https://api.github.com/repos/racklet/racklet,opened,RFC 2: TUFBoot,kind/design priority/important-longterm subproject/tufboot,"Describe in the second RFC how we'd like to handle the boot process of the bare metal compute using TUF, libgitops, LinuxBoot, and related technologies.",1.0,"RFC 2: TUFBoot - Describe in the second RFC how we'd like to handle the boot process of the bare metal compute using TUF, libgitops, LinuxBoot, and related technologies.",0,rfc tufboot describe in the second rfc how we d like to handle the boot process of the bare metal compute using tuf libgitops linuxboot and related technologies ,0
120522,15774057663.0,IssuesEvent,2021-04-01 00:22:01,hackforla/HomeUniteUs,https://api.github.com/repos/hackforla/HomeUniteUs,closed,Audit Calendaring: Case Worker Hi Fid Prototype 2.0,Design System UI/UX,"### Overview
We need to audit 'Calendaring: Case Worker Hi Fid Prototype 2.0' in order to create a coherent design system for HUU.
### Action Items
- [x] Meet with Julia to go over the audit process
- [x] Audit in progress
- [x] Review existing screens
- [x] Take notes of current styles (typography + color)
- [x] Take notes of current styles (components - buttons & icons)
- [x] List out proposed changes
- [x] Audit ready for review
- [x] Review with team
- [x] Document feedbacks from team
- [x] Iterate based on feedback
- [x] Ready for design system
### Resources
Figma: https://www.figma.com/file/BNWqZk8SHKbtN1nw8BB7VM/Current-HUU-Everything-Figma?node-id=631%3A9689",1.0,"Audit Calendaring: Case Worker Hi Fid Prototype 2.0 - ### Overview
We need to audit 'Calendaring: Case Worker Hi Fid Prototype 2.0' in order to create a coherent design system for HUU.
### Action Items
- [x] Meet with Julia to over audit process
- [x] Audit in progress
- [x] Review existing screens
- [x] Take notes of current styles (typography + color)
- [x] Take notes of current styles (components - buttons & icons)
- [x] List out proposed changes
- [x] Audit ready for review
- [x] Review with team
- [x] Document feedbacks from team
- [x] Iterate based on feedback
- [x] Ready for design system
### Resources
Figma: https://www.figma.com/file/BNWqZk8SHKbtN1nw8BB7VM/Current-HUU-Everything-Figma?node-id=631%3A9689",0,audit calendaring case worker hi fid prototype overview we need to audit calendaring case worker hi fid prototype in order to create a coherent design system for huu action items meet with julia to over audit process audit in progress review existing screens take notes of current styles typography color take notes of current styles components buttons icons list out proposed changes audit ready for review review with team document feedbacks from team iterate based on feedback ready for design system resources figma ,0
66378,8920200871.0,IssuesEvent,2019-01-21 05:31:22,golang/go,https://api.github.com/repos/golang/go,closed,"flag: PrintDefaults claims configurability, but doesn't say how",Documentation NeedsFix,"https://golang.org/pkg/flag/#PrintDefaults says
> PrintDefaults prints, to standard error unless configured otherwise, ...
I couldn't see anywhere how to send its output anywhere else.
I dug through the source code and eventually made my way to `(*FlagSet).SetOutput`, but that was not at all obvious.
",1.0,"flag: PrintDefaults claims configurability, but doesn't say how - https://golang.org/pkg/flag/#PrintDefaults says
> PrintDefaults prints, to standard error unless configured otherwise, ...
I couldn't see anywhere how to send its output anywhere else.
I dug through the source code and eventually made my way to `(*FlagSet).SetOutput`, but that was not at all obvious.
",0,flag printdefaults claims configurability but doesn t say how says printdefaults prints to standard error unless configured otherwise i couldn t see anywhere how to send its output anywhere else i dug through the source code and eventually made my way to flagset setoutput but that was not at all obvious ,0
268461,20324532659.0,IssuesEvent,2022-02-18 03:40:15,frc2609/rapid-react-robot-code-2022,https://api.github.com/repos/frc2609/rapid-react-robot-code-2022,closed,Lay out steps for climbing and document this,documentation,"We need concrete list of steps on how to achieve climbing, and we need this documented. Discuss things such as how we want the robot to climb up, whether we will use swinging or not, whether we want to use the base of the robot as a counter weight, etc.",1.0,"Lay out steps for climbing and document this - We need concrete list of steps on how to achieve climbing, and we need this documented. Discuss things such as how we want the robot to climb up, whether we will use swinging or not, whether we want to use the base of the robot as a counter weight, etc.",0,lay out steps for climbing and document this we need concrete list of steps on how to achieve climbing and we need this documented discuss things such as how we want the robot to climb up whether we will use swinging or not whether we want to use the base of the robot as a counter weight etc ,0
48207,2994700564.0,IssuesEvent,2015-07-22 13:23:48,CIS-412-Spring-2015/frontend,https://api.github.com/repos/CIS-412-Spring-2015/frontend,closed,Make the Cancel Button go to In Progress page,Backlog enhancement Low Priority,"As a web developer I want to be able to provide functionality to the Cancel button represented on the footer panel of the views, so that when the user clicks on it they are warned and then if they proceed it takes them to the in progress page and cancels their endeavors.
- [x] Make a modal appear that warns user
- [x] The ""OK"" option takes them to the in progress page
- [x] The ""Cancel"" or ""x"" option leaves the user on the same page. ",1.0,"Make the Cancel Button go to In Progress page - As a web developer I want to be able to provide functionality to the Cancel button represented on the footer panel of the views, so that when the user clicks on it they are warned and then if they proceed it takes them to the in progress page and cancels their endeavors.
- [x] Make a modal appear that warns user
- [x] The ""OK"" option takes them to the in progress page
- [x] The ""Cancel"" or ""x"" option leaves the user on the same page. ",0,make the cancel button go to in progress page as a web developer i want to be able to provide functionality to the cancel button represented on the footer panel of the views so that when the user clicks on it they are warned and then if they proceed it takes them to the in progress page and cancels their endeavors make a modal appear that warns user the ok option takes them to the in progress page the cancel or x option leaves the user on the same page ,0
237356,26084101207.0,IssuesEvent,2022-12-25 21:25:19,samqws-marketing/walmartlabs-concord,https://api.github.com/repos/samqws-marketing/walmartlabs-concord,opened,CVE-2022-46175 (High) detected in multiple libraries,security vulnerability,"## CVE-2022-46175 - High Severity Vulnerability
Vulnerable Libraries - json5-1.0.1.tgz, json5-2.2.0.tgz, json5-0.5.1.tgz
Path to vulnerable library: /console2/node_modules/html-webpack-plugin/node_modules/json5/package.json,/console2/node_modules/tsconfig-paths/node_modules/json5/package.json,/console2/node_modules/resolve-url-loader/node_modules/json5/package.json,/console2/node_modules/postcss-loader/node_modules/json5/package.json,/console2/node_modules/babel-loader/node_modules/json5/package.json,/console2/node_modules/mini-css-extract-plugin/node_modules/json5/package.json,/console2/node_modules/webpack/node_modules/json5/package.json
Path to vulnerable library: /console2/node_modules/babel-register/node_modules/json5/package.json,/console2/node_modules/babel-cli/node_modules/json5/package.json
JSON5 is an extension to the popular JSON file format that aims to be easier to write and maintain by hand (e.g. for config files). The `parse` method of the JSON5 library before and including version `2.2.1` does not restrict parsing of keys named `__proto__`, allowing specially crafted strings to pollute the prototype of the resulting object. This vulnerability pollutes the prototype of the object returned by `JSON5.parse` and not the global Object prototype, which is the commonly understood definition of Prototype Pollution. However, polluting the prototype of a single object can have significant security impact for an application if the object is later used in trusted operations. This vulnerability could allow an attacker to set arbitrary and unexpected keys on the object returned from `JSON5.parse`. The actual impact will depend on how applications utilize the returned object and how they filter unwanted keys, but could include denial of service, cross-site scripting, elevation of privilege, and in extreme cases, remote code execution. `JSON5.parse` should restrict parsing of `__proto__` keys when parsing JSON strings to objects. As a point of reference, the `JSON.parse` method included in JavaScript ignores `__proto__` keys. Simply changing `JSON5.parse` to `JSON.parse` in the examples above mitigates this vulnerability. This vulnerability is patched in json5 version 2.2.2 and later.
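The parsing behaviour described in the record above can be illustrated with a small script. This is a minimal, hypothetical sketch rather than code taken from the affected project: it assumes a Node.js environment with the `json5` npm package installed at a vulnerable 2.2.x release, and the `payload` / `isAdmin` names are invented purely for illustration.
```typescript
// Sketch of the prototype-pollution behaviour described in CVE-2022-46175.
// Assumes Node.js with json5 <= 2.2.1 installed; the isAdmin name is illustrative only.
import JSON5 from 'json5';

// Build a JSON document whose only top-level key is __proto__.
// (A computed key creates an ordinary own property, so JSON.stringify emits it.)
const payload = JSON.stringify({ ['__proto__']: { isAdmin: true } });

// On a vulnerable json5 release, the __proto__ key sets the prototype of the
// returned object, so isAdmin is visible even though it is not an own property.
const parsed = JSON5.parse(payload);
console.log(parsed.isAdmin);                                           // true on json5 <= 2.2.1
console.log(Object.prototype.hasOwnProperty.call(parsed, 'isAdmin'));  // false - inherited

// JSON.parse keeps __proto__ as an ordinary own data property instead,
// leaving the prototype chain untouched - the mitigation mentioned above.
const safe = JSON.parse(payload);
console.log(Object.getPrototypeOf(safe) === Object.prototype);         // true
console.log(safe.isAdmin);                                             // undefined
```
On json5 2.2.2 and later, `JSON5.parse` also treats `__proto__` as a plain own key, which is why the suggested fix is simply to upgrade.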
Path to vulnerable library: /console2/node_modules/html-webpack-plugin/node_modules/json5/package.json,/console2/node_modules/tsconfig-paths/node_modules/json5/package.json,/console2/node_modules/resolve-url-loader/node_modules/json5/package.json,/console2/node_modules/postcss-loader/node_modules/json5/package.json,/console2/node_modules/babel-loader/node_modules/json5/package.json,/console2/node_modules/mini-css-extract-plugin/node_modules/json5/package.json,/console2/node_modules/webpack/node_modules/json5/package.json
Path to vulnerable library: /console2/node_modules/babel-register/node_modules/json5/package.json,/console2/node_modules/babel-cli/node_modules/json5/package.json
JSON5 is an extension to the popular JSON file format that aims to be easier to write and maintain by hand (e.g. for config files). The `parse` method of the JSON5 library before and including version `2.2.1` does not restrict parsing of keys named `__proto__`, allowing specially crafted strings to pollute the prototype of the resulting object. This vulnerability pollutes the prototype of the object returned by `JSON5.parse` and not the global Object prototype, which is the commonly understood definition of Prototype Pollution. However, polluting the prototype of a single object can have significant security impact for an application if the object is later used in trusted operations. This vulnerability could allow an attacker to set arbitrary and unexpected keys on the object returned from `JSON5.parse`. The actual impact will depend on how applications utilize the returned object and how they filter unwanted keys, but could include denial of service, cross-site scripting, elevation of privilege, and in extreme cases, remote code execution. `JSON5.parse` should restrict parsing of `__proto__` keys when parsing JSON strings to objects. As a point of reference, the `JSON.parse` method included in JavaScript ignores `__proto__` keys. Simply changing `JSON5.parse` to `JSON.parse` in the examples above mitigates this vulnerability. This vulnerability is patched in json5 version 2.2.2 and later.
",0,cve high detected in multiple libraries cve high severity vulnerability vulnerable libraries tgz tgz tgz tgz json for humans library home page a href path to dependency file package json path to vulnerable library node modules html webpack plugin node modules package json node modules tsconfig paths node modules package json node modules resolve url loader node modules package json node modules postcss loader node modules package json node modules babel loader node modules package json node modules mini css extract plugin node modules package json node modules webpack node modules package json dependency hierarchy react scripts tgz root library babel loader tgz loader utils tgz x tgz vulnerable library tgz json for humans library home page a href path to dependency file package json path to vulnerable library node modules package json dependency hierarchy react scripts tgz root library core tgz x tgz vulnerable library tgz json for the era library home page a href path to dependency file package json path to vulnerable library node modules babel register node modules package json node modules babel cli node modules package json dependency hierarchy babel cli tgz root library babel core tgz x tgz vulnerable library found in head commit a href found in base branch master vulnerability details is an extension to the popular json file format that aims to be easier to write and maintain by hand e g for config files the parse method of the library before and including version does not restrict parsing of keys named proto allowing specially crafted strings to pollute the prototype of the resulting object this vulnerability pollutes the prototype of the object returned by parse and not the global object prototype which is the commonly understood definition of prototype pollution however polluting the prototype of a single object can have significant security impact for an application if the object is later used in trusted operations this vulnerability could allow an attacker to set arbitrary and unexpected keys on the object returned from parse the actual impact will depend on how applications utilize the returned object and how they filter unwanted keys but could include denial of service cross site scripting elevation of privilege and in extreme cases remote code execution parse should restrict parsing of proto keys when parsing json strings to objects as a point of reference the json parse method included in javascript ignores proto keys simply changing parse to json parse in the examples above mitigates this vulnerability this vulnerability is patched in version and later publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact low availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution ,0
141112,18945391067.0,IssuesEvent,2021-11-18 09:37:40,MicrosoftDocs/microsoft-365-docs,https://api.github.com/repos/MicrosoftDocs/microsoft-365-docs,closed,Run Tests: DKIM button lead nowhere useful,security Defender for Office 365 doc-enhancement writer-input-required,"There is a button on this document labelled ""Run Tests: DKIM"" which point to https://aka.ms/diagdkim when I click and log into my tenant It places ""Diag: DKIM"" in a field under ""How can we help?"" heading on a pane to the right and then returns ""no solutions found""
The button goes nowhere useful. It would be better if it led to an actual DKIM setup testing form.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 249c059f-c3ce-a382-b99e-795a0c1f646a
* Version Independent ID: 5656c114-ae16-6194-cd9f-353b33f4a1e1
* Content: [How to use DKIM for email in your custom domain - Office 365](https://docs.microsoft.com/en-us/microsoft-365/security/office-365-security/use-dkim-to-validate-outbound-email?view=o365-worldwide)
* Content Source: [microsoft-365/security/office-365-security/use-dkim-to-validate-outbound-email.md](https://github.com/MicrosoftDocs/microsoft-365-docs/blob/public/microsoft-365/security/office-365-security/use-dkim-to-validate-outbound-email.md)
* Product: **m365-security**
* Technology: **mdo**
* GitHub Login: @MSFTTracyP
* Microsoft Alias: **tracyp**",True,"Run Tests: DKIM button lead nowhere useful - There is a button on this document labelled ""Run Tests: DKIM"" which point to https://aka.ms/diagdkim when I click and log into my tenant It places ""Diag: DKIM"" in a field under ""How can we help?"" heading on a pane to the right and then returns ""no solutions found""
Button goes no-where useful. Would be better if it lead to an actual DKIM setup testing form.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 249c059f-c3ce-a382-b99e-795a0c1f646a
* Version Independent ID: 5656c114-ae16-6194-cd9f-353b33f4a1e1
* Content: [How to use DKIM for email in your custom domain - Office 365](https://docs.microsoft.com/en-us/microsoft-365/security/office-365-security/use-dkim-to-validate-outbound-email?view=o365-worldwide)
* Content Source: [microsoft-365/security/office-365-security/use-dkim-to-validate-outbound-email.md](https://github.com/MicrosoftDocs/microsoft-365-docs/blob/public/microsoft-365/security/office-365-security/use-dkim-to-validate-outbound-email.md)
* Product: **m365-security**
* Technology: **mdo**
* GitHub Login: @MSFTTracyP
* Microsoft Alias: **tracyp**",0,run tests dkim button lead nowhere useful there is a button on this document labelled run tests dkim which point to when i click and log into my tenant it places diag dkim in a field under how can we help heading on a pane to the right and then returns no solutions found button goes no where useful would be better if it lead to an actual dkim setup testing form document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product security technology mdo github login msfttracyp microsoft alias tracyp ,0
8779,27172240289.0,IssuesEvent,2023-02-17 20:35:11,OneDrive/onedrive-api-docs,https://api.github.com/repos/OneDrive/onedrive-api-docs,closed,Beta pages API doesn't work if you have pages with custom content types,area:Pages Needs: Investigation automation:Closed,"#### Category
- [ ] Question
- [ ] Documentation issue
- [x] Bug
#### Expected or Desired Behavior
The API call to succeed and return pages.
#### Observed Behavior
You get the following error.
```JSON
{
""error"": {
""code"": ""generalException"",
""message"": ""General exception while processing"",
""innerError"": {
""request-id"": ""6041c48f-732f-4a0b-9849-0294bd9e637b"",
""date"": ""2019-11-25T17:00:43""
}
}
}
```
Tenant mag013qa2tolead
RID's
6041c48f-732f-4a0b-9849-0294bd9e637b (graph explorer)
3df05a8d-84e5-49f8-8809-1feff5127225 (dotnet sdk)
#### Steps to Reproduce
1. create a team or communication site
1. add custom content types to the site pages library
1. add pages using that new content type
1. GET `https://graph.microsoft.com/beta//pages`
Thanks for your help!",1.0,"Beta pages API doesn't work if you have pages with custom content types - #### Category
- [ ] Question
- [ ] Documentation issue
- [x] Bug
#### Expected or Desired Behavior
The API call to succeed and return pages.
#### Observed Behavior
You get the following error.
```JSON
{
""error"": {
""code"": ""generalException"",
""message"": ""General exception while processing"",
""innerError"": {
""request-id"": ""6041c48f-732f-4a0b-9849-0294bd9e637b"",
""date"": ""2019-11-25T17:00:43""
}
}
}
```
Tenant mag013qa2tolead
RID's
6041c48f-732f-4a0b-9849-0294bd9e637b (graph explorer)
3df05a8d-84e5-49f8-8809-1feff5127225 (dotnet sdk)
#### Steps to Reproduce
1. create a team or communication site
1. add custom content types to the site pages library
1. add pages using that new content type
1. GET `https://graph.microsoft.com/beta//pages`
Thanks for your help!",1,beta pages api doesn t work if you have pages with custom content types category question documentation issue bug expected or desired behavior the api call to succeed and return pages observed behavior you get the following error json error code generalexception message general exception while processing innererror request id date tenant rid s graph explorer dotnet sdk steps to reproduce create a team or communication site add custom content types to the site pages library add pages using that new content type get thanks for your help ,1
35337,17027378683.0,IssuesEvent,2021-07-03 20:35:22,JuliaLang/julia,https://api.github.com/repos/JuliaLang/julia,closed,Improving broadcasting performance by working around recursion limits of inlining,broadcast performance,"Hi!
I've discovered that many (runtime) performance issues with broadcasting are caused by inlining not working with the highly recursive broadcasting code. It turns out that defining more methods can actually help here. Here is a piece of code you can evaluate in REPL to see that:
```julia
struct RecursiveInliningEnforcerA{T}
makeargs::T
end
struct RecursiveInliningEnforcerB{TMT,TMH,TT,TH,TF}
makeargs_tail::TMT
makeargs_head::TMH
headargs::TH
tailargs::TT
f::TF
end
for UB in [Any, RecursiveInliningEnforcerA]
@eval @inline function (bb::RecursiveInliningEnforcerB{TMT,TMH,TT,TH,TF})(args::Vararg{Any,N}) where {N,TMT,TMH<:$UB,TT,TH,TF}
args1 = bb.makeargs_head(args...)
a = bb.headargs(args1...)
b = bb.makeargs_tail(bb.tailargs(args1...)...)
return (bb.f(a...), b...)
end
end
for UB in [Any, RecursiveInliningEnforcerB]
@eval @inline (a::RecursiveInliningEnforcerA{TTA})(head::TH, tail::Vararg{Any,N}) where {TTA<:$UB,TH,N} = (head, a.makeargs(tail...)...)
end
@inline function Broadcast.make_makeargs(makeargs_tail::TT, t::Tuple) where TT
return RecursiveInliningEnforcerA(Broadcast.make_makeargs(makeargs_tail, Base.tail(t)))
end
function Broadcast.make_makeargs(makeargs_tail, t::Tuple{<:Broadcast.Broadcasted, Vararg{Any}})
bc = t[1]
# c.f. the same expression in the function on leaf nodes above. Here
# we recurse into siblings in the broadcast tree.
let makeargs_tail = Broadcast.make_makeargs(makeargs_tail, Base.tail(t)),
# Here we recurse into children. It would be valid to pass in makeargs_tail
# here, and not use it below. However, in that case, our recursion is no
# longer purely structural because we're building up one argument (the closure)
# while destructuring another.
makeargs_head = Broadcast.make_makeargs((args...)->args, bc.args),
f = bc.f
# Create two functions, one that splits off the first length(bc.args)
# elements from the tuple and one that yields the remaining arguments.
# N.B. We can't call headargs on `args...` directly because
# args is flattened (i.e. our children have not been evaluated
# yet).
headargs, tailargs = Broadcast.make_headargs(bc.args), Broadcast.make_tailargs(bc.args)
return RecursiveInliningEnforcerB(makeargs_tail, makeargs_head, headargs, tailargs, f)
end
end
```
This effectively duplicates these two functions: https://github.com/JuliaLang/julia/blob/abbb220b89ebcec87efd9fbf6c0ccae4f2a3ef4a/base/broadcast.jl#L380-L384 and https://github.com/JuliaLang/julia/blob/abbb220b89ebcec87efd9fbf6c0ccae4f2a3ef4a/base/broadcast.jl#L361 for different argument types.
It turns out that it's sufficient to fix the following issues:
https://github.com/JuliaArrays/StaticArrays.jl/issues/560
https://github.com/JuliaArrays/StaticArrays.jl/issues/682
https://github.com/JuliaArrays/StaticArrays.jl/issues/609
https://github.com/JuliaArrays/StaticArrays.jl/issues/797
What do you think about it?",True,"Improving broadcasting performance by working around recursion limits of inlining - Hi!
I've discovered that many (runtime) performance issues with broadcasting are caused by inlining not working with the highly recursive broadcasting code. It turns out that defining more methods can actually help here. Here is a piece of code you can evaluate in REPL to see that:
```julia
struct RecursiveInliningEnforcerA{T}
makeargs::T
end
struct RecursiveInliningEnforcerB{TMT,TMH,TT,TH,TF}
makeargs_tail::TMT
makeargs_head::TMH
headargs::TH
tailargs::TT
f::TF
end
for UB in [Any, RecursiveInliningEnforcerA]
@eval @inline function (bb::RecursiveInliningEnforcerB{TMT,TMH,TT,TH,TF})(args::Vararg{Any,N}) where {N,TMT,TMH<:$UB,TT,TH,TF}
args1 = bb.makeargs_head(args...)
a = bb.headargs(args1...)
b = bb.makeargs_tail(bb.tailargs(args1...)...)
return (bb.f(a...), b...)
end
end
for UB in [Any, RecursiveInliningEnforcerB]
@eval @inline (a::RecursiveInliningEnforcerA{TTA})(head::TH, tail::Vararg{Any,N}) where {TTA<:$UB,TH,N} = (head, a.makeargs(tail...)...)
end
@inline function Broadcast.make_makeargs(makeargs_tail::TT, t::Tuple) where TT
return RecursiveInliningEnforcerA(Broadcast.make_makeargs(makeargs_tail, Base.tail(t)))
end
function Broadcast.make_makeargs(makeargs_tail, t::Tuple{<:Broadcast.Broadcasted, Vararg{Any}})
bc = t[1]
# c.f. the same expression in the function on leaf nodes above. Here
# we recurse into siblings in the broadcast tree.
let makeargs_tail = Broadcast.make_makeargs(makeargs_tail, Base.tail(t)),
# Here we recurse into children. It would be valid to pass in makeargs_tail
# here, and not use it below. However, in that case, our recursion is no
# longer purely structural because we're building up one argument (the closure)
# while destructuing another.
makeargs_head = Broadcast.make_makeargs((args...)->args, bc.args),
f = bc.f
# Create two functions, one that splits of the first length(bc.args)
# elements from the tuple and one that yields the remaining arguments.
# N.B. We can't call headargs on `args...` directly because
# args is flattened (i.e. our children have not been evaluated
# yet).
headargs, tailargs = Broadcast.make_headargs(bc.args), Broadcast.make_tailargs(bc.args)
return RecursiveInliningEnforcerB(makeargs_tail, makeargs_head, headargs, tailargs, f)
end
end
```
This effectively duplicates these two functions: https://github.com/JuliaLang/julia/blob/abbb220b89ebcec87efd9fbf6c0ccae4f2a3ef4a/base/broadcast.jl#L380-L384 and https://github.com/JuliaLang/julia/blob/abbb220b89ebcec87efd9fbf6c0ccae4f2a3ef4a/base/broadcast.jl#L361 for different argument types.
It turns out that it's sufficient to fix the following issues:
https://github.com/JuliaArrays/StaticArrays.jl/issues/560
https://github.com/JuliaArrays/StaticArrays.jl/issues/682
https://github.com/JuliaArrays/StaticArrays.jl/issues/609
https://github.com/JuliaArrays/StaticArrays.jl/issues/797
What do you think about it?",0,improving broadcasting performance by working around recursion limits of inlining hi i ve discovered that many runtime performance issues with broadcasting are caused by inlining not working with the highly recursive broadcasting code it turns out that defining more methods can actually help here here is a piece of code you can evaluate in repl to see that julia struct recursiveinliningenforcera t makeargs t end struct recursiveinliningenforcerb tmt tmh tt th tf makeargs tail tmt makeargs head tmh headargs th tailargs tt f tf end for ub in eval inline function bb recursiveinliningenforcerb tmt tmh tt th tf args vararg any n where n tmt tmh ub tt th tf bb makeargs head args a bb headargs b bb makeargs tail bb tailargs return bb f a b end end for ub in eval inline a recursiveinliningenforcera tta head th tail vararg any n where tta ub th n head a makeargs tail end inline function broadcast make makeargs makeargs tail tt t tuple where tt return recursiveinliningenforcera broadcast make makeargs makeargs tail base tail t end function broadcast make makeargs makeargs tail t tuple broadcast broadcasted vararg any bc t c f the same expression in the function on leaf nodes above here we recurse into siblings in the broadcast tree let makeargs tail broadcast make makeargs makeargs tail base tail t here we recurse into children it would be valid to pass in makeargs tail here and not use it below however in that case our recursion is no longer purely structural because we re building up one argument the closure while destructuing another makeargs head broadcast make makeargs args args bc args f bc f create two functions one that splits of the first length bc args elements from the tuple and one that yields the remaining arguments n b we can t call headargs on args directly because args is flattened i e our children have not been evaluated yet headargs tailargs broadcast make headargs bc args broadcast make tailargs bc args return recursiveinliningenforcerb makeargs tail makeargs head headargs tailargs f end end this effectively duplicates these two functions and for different argument types it turns out that it s sufficient to fix the following issues what do you think about it ,0
173392,13399601685.0,IssuesEvent,2020-09-03 14:42:11,ultimate-pa/ultimate,https://api.github.com/repos/ultimate-pa/ultimate,opened,Wrong specification in rtinconsistency_test36,ReqAnalyzer test framework,"Test specification should be as follows ...
`// #TestSpec: rt-inconsistent:; vacuous:req1; inconsistent:; results:-1`
Maybe move the file into the vacuity folder.

",1.0,"Wrong specification in rtinconsistency_test36 - Test specification should be as follows ...
`// #TestSpec: rt-inconsistent:; vacuous:req1; inconsistent:; results:-1`
Maybe move the file into vacuity folder.

",0,wrong specification in rtinconsistency test specification should be as follows testspec rt inconsistent vacuous inconsistent results maybe move the file into vacuity folder ,0
800,8149134483.0,IssuesEvent,2018-08-22 08:40:37,PierreRambaud/PrestaShop,https://api.github.com/repos/PierreRambaud/PrestaShop,closed,[BOOM-3849] css & js should use CDN when activated,1.7.2.2 1.7.4.0 Bug QA_automation Standard To Do Topwatchers,"
> This issue has been migrated from this Forge ticket [http://forge.prestashop.com/browse/BOOM-3849](http://forge.prestashop.com/browse/BOOM-3849)
- _**Reporter:**_ jmawad@bobbies.com
- _**Created at:**_ Fri, 15 Sep 2017 12:28:15 +0200
Apparently, even if you use a media server, the cached css and js are not served through the CDN feature the way the images are.
You should write an .htaccess rule to handle /themes/[theme]/assets/cache, and use the media server URL in the header when calling the css.
",1.0,"[BOOM-3849] css & js should use CDN when activated -
> This issue has been migrated from this Forge ticket [http://forge.prestashop.com/browse/BOOM-3849](http://forge.prestashop.com/browse/BOOM-3849)
- _**Reporter:**_ jmawad@bobbies.com
- _**Created at:**_ Fri, 15 Sep 2017 12:28:15 +0200
Apparently, even if you use mediaserver, the css and js in cache are not using CDN feature, as the images do.
You should write a Htaccess rule to handle the /themes/[theme]/assets/cache, and use media server url in header when calling the css
",1, css js should use cdn when activated this issue has been migrated from this forge ticket reporter jmawad bobbies com created at fri sep apparently even if you use mediaserver the css and js in cache are not using cdn feature as the images do you should write a htaccess rule to handle the themes theme assets cache and use media server url in header when calling the css ,1
40824,10583043295.0,IssuesEvent,2019-10-08 12:57:07,ocaml/opam,https://api.github.com/repos/ocaml/opam,closed,Solaris 10 patch command doesn't get file to patch,AREA: BUILD AREA: PORTABILITY,"After editing
opam-full-1.2.2-rc2/src_ext/Makefile
to remove suppression of recipe echoing:
...
if [ -d patches/cmdliner ]; then \
cd cmdliner && \
for p in ../patches/cmdliner/*.patch; do \
patch -p1 < $p; \
done; \
fi
Looks like a unified context diff.
File to patch:
That is, the patch command prompts the user.
opam-full-1.2.2-rc2/src_ext/patches/cmdliner/backport_pre_4_00_0.patch
diff -Naur cmdliner-0.9.7/src/cmdliner.ml cmdliner-0.9.7.patched/src/cmdliner.ml
--- cmdliner-0.9.7/src/cmdliner.ml 2015-02-06 11:33:44.000000000 +0100
+++ cmdliner-0.9.7.patched/src/cmdliner.ml 2015-02-18 23:04:04.000000000 +0100
...
See the man page for the Solaris 10 patch command.
http://docs.oracle.com/cd/E19253-01/816-5165/6mbb0m9n6/index.html
In particular, we are interested in the ""File Name Determination"" section of that document.
If no file operand is specified, patch performs the following steps to obtain a path name:
If the patch contains the strings *** and ---, patch strips components from the beginning of each path name (depending on the presence or value of the -p option), then tests for the existence of both files in the current directory ...
src/cmdliner.ml
src/cmdliner.ml
""Both"" files exist.
If both files exist, patch assumes that no path name can be obtained from this step ...
If no path name can be obtained by applying the previous steps, ... patch will write a prompt to standard output and request a file name interactively from standard input.
One possible solution is for the makefile to read the patch file, extracting the path name using the Linux patch command algorithm. Then feed that path name to the patch command explicitly.
Alan Feldstein
Cosmic Horizon
http://www.alanfeldstein.com
",1.0,"Solaris 10 patch command doesn't get file to patch - After editing
opam-full-1.2.2-rc2/src_ext/Makefile
to remove suppression of recipe echoing:
...
if [ -d patches/cmdliner ]; then \
cd cmdliner && \
for p in ../patches/cmdliner/*.patch; do \
patch -p1 < $p; \
done; \
fi
Looks like a unified context diff.
File to patch:
That is, the patch command prompts the user.
opam-full-1.2.2-rc2/src_ext/patches/cmdliner/backport_pre_4_00_0.patch
diff -Naur cmdliner-0.9.7/src/cmdliner.ml cmdliner-0.9.7.patched/src/cmdliner.ml
--- cmdliner-0.9.7/src/cmdliner.ml 2015-02-06 11:33:44.000000000 +0100
+++ cmdliner-0.9.7.patched/src/cmdliner.ml 2015-02-18 23:04:04.000000000 +0100
...
See the man page for the Solaris 10 patch command.
http://docs.oracle.com/cd/E19253-01/816-5165/6mbb0m9n6/index.html
In particular, we are interested in the ""File Name Determination"" section of that document.
If no file operand is specified, patch performs the following steps to obtain a path name:
If the patch contains the strings **\* and - - -, patch strips components from the beginning of each path name (depending on the presence or value of the -p option), then tests for the existence of both files in the current directory ...
src/cmdliner.ml
src/cmdliner.ml
""Both"" files exist.
If both files exist, patch assumes that no path name can be obtained from this step ...
If no path name can be obtained by applying the previous steps, ... patch will write a prompt to standard output and request a file name interactively from standard input.
One possible solution is for the makefile to read the patch file, extracting the path name using the Linux patch command algorithm. Then feed that path name to the patch command explicitly.
Alan Feldstein
Cosmic Horizon
http://www.alanfeldstein.com
",0,solaris patch command doesn t get file to patch after editing opam full src ext makefile to remove suppression of recipe echoing if then cd cmdliner for p in patches cmdliner patch do patch p done fi looks like a unified context diff file to patch that is the patch command prompts the user opam full src ext patches cmdliner backport pre patch diff naur cmdliner src cmdliner ml cmdliner patched src cmdliner ml cmdliner src cmdliner ml cmdliner patched src cmdliner ml see the man page for the solaris patch command in particular we are interested in the file name determination section of that document if no file operand is specified patch performs the following steps to obtain a path name if the patch contains the strings and patch strips components from the beginning of each path name depending on the presence or value of the p option then tests for the existence of both files in the current directory src cmdliner ml src cmdliner ml both files exist if both files exist patch assumes that no path name can be obtained from this step if no path name can be obtained by applying the previous steps patch will write a prompt to standard output and request a file name interactively from standard input one possible solution is for the makefile to read the patch file extracting the path name using the linux patch command algorithm then feed that path name to the patch command explicitly alan feldstein cosmic horizon ,0
2467,12059707107.0,IssuesEvent,2020-04-15 19:46:41,BCDevOps/OpenShift4-RollOut,https://api.github.com/repos/BCDevOps/OpenShift4-RollOut,opened,Upgrade Aporeto For Use with OCP4,tech/automation tech/networking,"Tasks:
- [x] Upgrade Aporeto Playbook for current Aporeto Release
- [x] Ensure Cordon/Evac Install and Upgrades work as expected
- [ ] Upgrade BCGov Network Policy Operator to a supported Operator-SDK version and Test
- [x] Write Host Protection Policies
- [ ] Move any remaining policy, enforcer, or namespace related policy configuration to NS Management Playbook
- [ ] Verify Host Protection is Enabled and Working properly with Assistance from DXC
I'll continue adding to this with further detail as things progress. As of today (April 15th 2020) I'm testing the playbook and making some additional changes to ensure host protection works
",1.0,"Upgrade Aporeto For Use with OCP4 - Tasks:
- [x] Upgrade Aporeto Playbook for current Aporeto Release
- [x] Ensure Cordon/Evac Install and Upgrades work as expected
- [ ] Upgrade BCGov Network Policy Operator to a supported Operator-SDK version and Test
- [x] Write Host Protection Policies
- [ ] Move any remaining policy, enforcer, or namespace related policy configuration to NS Management Playbook
- [ ] Verify Host Protection is Enabled and Working properly with Assistance from DXC
I'll continue adding to this with further detail as things progress. As of today (April 15th 2020) I'm testing the playbook and making some additional changes to ensure host protection works
",1,upgrade aporeto for use with tasks upgrade aporeto playbook for current aporeto release ensure cordon evac install and upgrades work as expected upgrade bcgov network policy operator to a supported operator sdk version and test write host protection policies move any remaining policy enforcer or namespace related policy configuration to ns management playbook verify host protection is enabled and working properly with assistance from dxc i ll continue adding to this with further detail as things progress as of today april i m testing the playbook and making some additional changes to ensure host protection works ,1
4221,15823876563.0,IssuesEvent,2021-04-06 01:53:41,DevExpress/testcafe,https://api.github.com/repos/DevExpress/testcafe,closed,Fix simulation of 'click' event in drag action,AREA: client FREQUENCY: level 1 STATE: Need improvement STATE: Stale SYSTEM: automations,"We should check that we simulate/skip simulation for `click` event in the end of `drag` action just like the browser does.
Take into account the following points:
1) prevention of previous events (mouseup, pointerup, touchup, etc)
2) if drag action is executed for really draggable elements or not
3) touch emulation specificity
4) different browsers specificity",1.0,"Fix simulation of 'click' event in drag action - We should check that we simulate/skip simulation for `click` event in the end of `drag` action just like the browser does.
Take into account the following points:
1) prevention of previous events (mouseup, pointerup, touchup, etc)
2) if drag action is executed for really draggable elements or not
3) touch emulation specificity
4) different browsers specificity",1,fix simulation of click event in drag action we should check that we simulate skip simulation for click event in the end of drag action just like the browser does take into account the following points prevention of previous events mouseup pointerup touchup etc if drag action is executed for really draggable elements or not touch emulation specificity different browsers specificity,1
45880,5756917367.0,IssuesEvent,2017-04-26 01:36:58,Microsoft/vscode,https://api.github.com/repos/Microsoft/vscode,opened,Test plan for Emmet features from extension using new APIs,testplan-item,"Test item for #21943
Complexity: 4
**Pre-req for testing**
- Clone the repo for the extension https://github.com/Microsoft/vscode-emmet
- Run `vsce package` to get the `.vsix` file
- Side load the extension by `code-insiders --install-extension emmet-0.0.1.vsix`
**Suggestion Items**
- As you type an abbreviation, the suggestion list should show the expanded abbreviation. Can be turned off by disabling `emmet.suggestExpandedAbbreviation`
- As you type an abbreviation, all possible abbreviations should show in the suggestion list. Can be turned off by disabling `emmet.suggestAbbreviations`
**Commands to test:**
- _Emmet 2.0: Expand abbreviation_
- The selected text or the text preceeding the cursor if no text is selected is taken as the abbreviation to expand.
- Works with html, xml, jade, slim, haml files and css,scss,sass, less, stylus files
- _Emmet 2.0: Wrap with abbreviation_
- The selected text or the current line if no text is selected, is wrapped with given abbreviation.
- Works with html, xml, jade, slim, haml files
- _Emmet 2.0: Remove Tag_
- The tag under the cursor is removed along with the corresponding opening/closing tag. Works with multiple cursors. Works only in html and xml files.
- _Emmet 2.0: Update Tag_
- The tag under the cursor is updated to the given tag. Works with multiple cursors. Works only in html and xml files.
- _Emmet 2.0: Go to Matching Pair_
- Cursor moves to the tag matching to the tag under cursor. Works with multiple cursors. Works only in html and xml files.
",1.0,"Test plan for Emmet features from extension using new APIs - Test item for #21943
Complexity: 4
**Pre-req for testing**
- Clone the repo for the extension https://github.com/Microsoft/vscode-emmet
- Run `vsce package` to get the six
- Side load the extension by `code-insiders --install-extension emmet-0.0.1.vsix`
**Suggestion Items**
- As you type an abbreviation, the suggestion list should show the expanded abbreviation. Can be turned off by disabling `emmet.suggestExpandedAbbreviation`
- As you type an abbreviation, all possible abbreviations should show in the suggestion list. Can be turned off by disabling `emmet.suggestAbbreviations`
**Commands to test:**
- _Emmet 2.0: Expand abbreviation_
- The selected text or the text preceeding the cursor if no text is selected is taken as the abbreviation to expand.
- Works with html, xml, jade, slim, haml files and css,scss,sass, less, stylus files
- _Emmet 2.0: Wrap with abbreviation_
- The selected text or the current line if no text is selected, is wrapped with given abbreviation.
- Works with html, xml, jade, slim, haml files
- _Emmet 2.0: Remove Tag_
- The tag under the cursor is removed along with the corresponding opening/closing tag. Works with multiple cursors. Works only in html and xml files.
- _Emmet 2.0: Update Tag_
- The tag under the cursor is updated to the given tag. Works with multiple cursors. Works only in html and xml files.
- _Emmet 2.0: Go to Matching Pair_
- Cursor moves to the tag matching to the tag under cursor. Works with multiple cursors. Works only in html and xml files.
",0,test plan for emmet features from extension using new apis test item for complexity pre req for testing clone the repo for the extension run vsce package to get the six side load the extension by code insiders install extension emmet vsix suggestion items as you type an abbreviation the suggestion list should show the expanded abbreviation can be turned off by disabling emmet suggestexpandedabbreviation as you type an abbreviation all possible abbreviations should show in the suggestion list can be turned off by disabling emmet suggestabbreviations commands to test emmet expand abbreviation the selected text or the text preceeding the cursor if no text is selected is taken as the abbreviation to expand works with html xml jade slim haml files and css scss sass less stylus files emmet wrap with abbreviation the selected text or the current line if no text is selected is wrapped with given abbreviation works with html xml jade slim haml files emmet remove tag the tag under the cursor is removed along with the corresponding opening closing tag works with multiple cursors works only in html and xml files emmet update tag the tag under the cursor is updated to the given tag works with multiple cursors works only in html and xml files emmet go to matching pair cursor moves to the tag matching to the tag under cursor works with multiple cursors works only in html and xml files ,0
7500,24991880294.0,IssuesEvent,2022-11-02 19:33:19,GoogleCloudPlatform/pubsec-declarative-toolkit,https://api.github.com/repos/GoogleCloudPlatform/pubsec-declarative-toolkit,opened,look at checking if there is a way to get workload AR vulnerability checks alongside the infrastructure vulnerability tab results already in SCC-P,developer-experience automation,"Michael will look at checking if there is a way to get workload AR vulnerability checks alongside the infrastructure vulnerability tab results already in SCC-P
Artifact Registry scanning of cloud build targeted container
SCC (non-premium) has the vulnerabilities tab, but not compliance or threats.
",1.0,"look at checking if there is a way to get workload AR vulnerability checks alongside the infrastructure vulnerability tab results already in SCC-P - Michael will look at checking if there is a way to get workload AR vulnerability checks alongside the infrastructure vulnerability tab results already in SCC-P
Artifact Registry scanning of cloud build targeted container
SCC (non-premium has the vulnerabilities tab - but not compliance or threats
",1,look at checking if there is a way to get workload ar vulnerability checks alongside the infrastructure vulnerability tab results already in scc p michael will look at checking if there is a way to get workload ar vulnerability checks alongside the infrastructure vulnerability tab results already in scc p artifact registry scanning of cloud build targeted container img width alt screen shot at pm src scc non premium has the vulnerabilities tab but not compliance or threats img width alt screen shot at pm src ,1
70427,30666492315.0,IssuesEvent,2023-07-25 18:41:58,cityofaustin/atd-data-tech,https://api.github.com/repos/cityofaustin/atd-data-tech,closed,Dev Team Mapping Sync,Type: Meeting Workgroup: DTS Service: Data Science Project: TPW Data Ecosystem Map,"### Objective
Meet with the expert dev team/owners to map out their knowledge base concerning the data pipelines for our major applications and dashboards.
### Participants
Dev Team: John and Chai. Data Science: Charlie, Rebekka, and Kate.
### Agenda
Add agenda here or create agenda from [this template](https://docs.google.com/document/d/1d_49KW5C_vSz8Bs50v-cxyIJuTNJMwMrh7ypcxRHgZI/edit#) and add link.
Working Miro Board - [Mapping Template](https://miro.com/app/board/uXjVM1BWe3I=/?share_link_id=270138594951)
------
- [x] Schedule meeting late July 2023
- [x] Optional: Schedule debrief
- [x] Meet and take notes
- [ ] Optional: Debrief with DTS team members
- [ ] Create resulting issues
",1.0,"Dev Team Mapping Sync - ### Objective
Meet with the expert dev team/owners to map our their knowledge base concerning data pipeline for our major applications and dashboards.
### Participants
Dev Team: John and Chai Data Science: Charlie, Rebekka and Kate
### Agenda
Add agenda here or create agenda from [this template](https://docs.google.com/document/d/1d_49KW5C_vSz8Bs50v-cxyIJuTNJMwMrh7ypcxRHgZI/edit#) and add link.
Working Miro Board - [Mapping Template](https://miro.com/app/board/uXjVM1BWe3I=/?share_link_id=270138594951)
------
- [x] Schedule meeting late July 2023
- [x] Optional: Schedule debrief
- [x] Meet and take notes
- [ ] Optional: Debrief with DTS team members
- [ ] Create resulting issues
",0,dev team mapping sync objective meet with the expert dev team owners to map our their knowledge base concerning data pipeline for our major applications and dashboards participants dev team john and chai data science charlie rebekka and kate agenda add agenda here or create agenda from and add link working miro board schedule meeting late july optional schedule debrief meet and take notes optional debrief with dts team members create resulting issues ,0
378488,26322867812.0,IssuesEvent,2023-01-10 02:14:38,apache/arrow,https://api.github.com/repos/apache/arrow,closed,[Documentation] Add PR template,Type: enhancement Component: Documentation,"### Describe the enhancement requested
See #15232
### Component(s)
Documentation",1.0,"[Documentation] Add PR template - ### Describe the enhancement requested
See #15232
### Component(s)
Documentation",0, add pr template describe the enhancement requested see component s documentation,0
9056,27437198039.0,IssuesEvent,2023-03-02 08:27:01,yugabyte/yugabyte-db,https://api.github.com/repos/yugabyte/yugabyte-db,reopened,[CDC] Data loss seen in snapshot + stream mode,kind/bug area/cdc priority/medium area/cdcsdk qa_automation,"Jira Link: [DB-4559](https://yugabyte.atlassian.net/browse/DB-4559)
### Description

All logs:
[All_Logs.zip](https://github.com/yugabyte/yugabyte-db/files/10312649/All_Logs.zip)
Few errors in connector logs:
```
org.yb.client.CDCErrorException: Server[0a46ce1e05b04b399bf2b97d325f3257] NETWORK_ERROR[code 8]: recvmsg error: Connection refused
at org.yb.client.TabletClient.dispatchCDCErrorOrReturnException(TabletClient.java:506)
at org.yb.client.TabletClient.decode(TabletClient.java:437)
at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:510)
at io.netty.handler.codec.ReplayingDecoder.callDecode(ReplayingDecoder.java:366)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:279)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:722)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:658)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:584)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:496)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
```",1.0,"[CDC] Data loss seen in snapshot + stream mode - Jira Link: [DB-4559](https://yugabyte.atlassian.net/browse/DB-4559)
### Description

All logs:
[All_Logs.zip](https://github.com/yugabyte/yugabyte-db/files/10312649/All_Logs.zip)
Few errors in connector logs:
```
org.yb.client.CDCErrorException: Server[0a46ce1e05b04b399bf2b97d325f3257] NETWORK_ERROR[code 8]: recvmsg error: Connection refused
at org.yb.client.TabletClient.dispatchCDCErrorOrReturnException(TabletClient.java:506)
at org.yb.client.TabletClient.decode(TabletClient.java:437)
at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:510)
at io.netty.handler.codec.ReplayingDecoder.callDecode(ReplayingDecoder.java:366)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:279)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:722)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:658)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:584)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:496)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
```",1, data loss seen in snapshot stream mode jira link description all logs few errors in connector logs org yb client cdcerrorexception server network error recvmsg error connection refused at org yb client tabletclient dispatchcdcerrororreturnexception tabletclient java at org yb client tabletclient decode tabletclient java at io netty handler codec bytetomessagedecoder decoderemovalreentryprotection bytetomessagedecoder java at io netty handler codec replayingdecoder calldecode replayingdecoder java at io netty handler codec bytetomessagedecoder channelread bytetomessagedecoder java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext firechannelread abstractchannelhandlercontext java at io netty handler timeout idlestatehandler channelread idlestatehandler java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext firechannelread abstractchannelhandlercontext java at io netty channel defaultchannelpipeline headcontext channelread defaultchannelpipeline java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel defaultchannelpipeline firechannelread defaultchannelpipeline java at io netty channel nio abstractniobytechannel niobyteunsafe read abstractniobytechannel java at io netty channel nio nioeventloop processselectedkey nioeventloop java at io netty channel nio nioeventloop processselectedkeysoptimized nioeventloop java at io netty channel nio nioeventloop processselectedkeys nioeventloop java at io netty channel nio nioeventloop run nioeventloop java at io netty util concurrent singlethreadeventexecutor run singlethreadeventexecutor java at io netty util internal threadexecutormap run threadexecutormap java at java base java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java base java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java base java lang thread run thread java ,1
3505,13879209498.0,IssuesEvent,2020-10-17 13:25:36,Cludch/HomeAssistant,https://api.github.com/repos/Cludch/HomeAssistant,closed,Automate office closet light,area: office automation lighting,The light in the office should turn on automatically if we are working and it turns dark outside.,1.0,Automate office closet light - The light in the office should turn on automatically if we are working and it turns dark outside.,1,automate office closet light the light in the office should turn on automatically if we are working and it turns dark outside ,1
408,6255706643.0,IssuesEvent,2017-07-14 08:02:41,vmware/harbor,https://api.github.com/repos/vmware/harbor,closed,project can not created when click ok button immediately,area/ui kind/automation-found kind/bug,"v1.1.2-294-g6ee631d
The project cannot be created when the OK button is clicked immediately.
",1.0,"project can not created when click ok button immediately - v1.1.2-294-g6ee631d
project can not created when click ok button immediately
",1,project can not created when click ok button immediately project can not created when click ok button immediately ,1
824858,31233211346.0,IssuesEvent,2023-08-20 00:29:36,zephyrproject-rtos/zephyr,https://api.github.com/repos/zephyrproject-rtos/zephyr,closed,ESP32-C3 with BLE setting CONFIG_MAC_BB_PD enabled doesn't build,bug priority: low platform: ESP32 Stale area: Bluetooth HCI,"**Describe the bug**
Zephyr fails to build when `CONFIG_MAC_BB_PD`, ""Power down MAC and baseband of Wi-Fi and Bluetooth when PHY is disabled"", is enabled.
**To Reproduce**
1. Append `CONFIG_MAC_BB_PD=y` to `prj.conf` in any bluetooth sample, e.g. `samples/bluetooth/beacon`.
2. Build for an ESP32-C3 board, e.g. `west build -b esp32c3_devkitm`
3. Link failure when building.
**Expected behavior**
Builds.
**Impact**
Can't use power down PHY feature.
**Logs and console output**
Cause of build failure:
```
[97/220] Building C object zephyr/modules/hal/espressif/zephyr/esp32c3/src/bt/esp_bt_adapter.c.obj
zephyr/modules/hal/espressif/zephyr/esp32c3/src/bt/esp_bt_adapter.c: In function 'esp_bt_controller_init':
zephyr/modules/hal/espressif/zephyr/esp32c3/src/bt/esp_bt_adapter.c:1022:13: warning: implicit declaration of function 'esp_register_mac_bb_pd_callback' [->
1022 | if (esp_register_mac_bb_pd_callback(btdm_mac_bb_power_down_cb) != 0) {
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
zephyr/modules/hal/espressif/zephyr/esp32c3/src/bt/esp_bt_adapter.c:1027:13: warning: implicit declaration of function 'esp_register_mac_bb_pu_callback' [->
1027 | if (esp_register_mac_bb_pu_callback(btdm_mac_bb_power_up_cb) != 0) {
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
zephyr/modules/hal/espressif/zephyr/esp32c3/src/bt/esp_bt_adapter.c:1189:9: warning: implicit declaration of function 'esp_unregister_mac_bb_pd_callback' [>
1189 | esp_unregister_mac_bb_pd_callback(btdm_mac_bb_power_down_cb);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
zephyr/modules/hal/espressif/zephyr/esp32c3/src/bt/esp_bt_adapter.c:1191:9: warning: implicit declaration of function 'esp_unregister_mac_bb_pu_callback' [>
1191 | esp_unregister_mac_bb_pu_callback(btdm_mac_bb_power_up_cb);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
```
And then a link failure caused by the undeclared functions above.
```
riscv32-esp-elf/bin/ld.bfd: zephyr/libzephyr.a(esp_bt_adapter.c.obj): in function `esp_bt_power_domain_on':
zephyr/modules/hal/espressif/zephyr/esp32c3/src/bt/esp_bt_adapter.c:385: undefined reference to `esp_register_mac_bb_pd_callback'
riscv32-esp-elf/bin/ld.bfd: zephyr/libzephyr.a(esp_bt_adapter.c.obj): in function `esp_bt_controller_init':
zephyr/modules/hal/espressif/zephyr/esp32c3/src/bt/esp_bt_adapter.c:1022: undefined reference to `esp_register_mac_bb_pu_callback'
riscv32-esp-elf/bin/ld.bfd: zephyr/modules/hal/espressif/zephyr/esp32c3/src/bt/esp_bt_adapter.c:1168: undefined reference to `esp_unregister_mac_bb_pd_callback'
riscv32-esp-elf/bin/ld.bfd: zephyr/modules/hal/espressif/zephyr/esp32c3/src/bt/esp_bt_adapter.c:1168: undefined reference to `esp_unregister_mac_bb_pu_callback'
```
**Environment (please complete the following information):**
- Linux
- Crosstool-NG esp-12.2.0_20230208
- Zephyr v3.4.0-rc1-198-g0ae7812f6b",1.0,"ESP32-C3 with BLE setting CONFIG_MAC_BB_PD enabled doesn't build - **Describe the bug**
Zephyr fails to build when `CONFIG_MAC_BB_PD`, ""Power down MAC and baseband of Wi-Fi and Bluetooth when PHY is disabled"", is enabled.
**To Reproduce**
1. Append `CONFIG_MAC_BB_PD=y` to `prj.conf` in any bluetooth sample, e.g. `samples/bluetooth/beacon`.
2. Build for an ESP32-C3 board, e.g. `west build -b esp32c3_devkitm`
3. Link failure when building.
**Expected behavior**
Builds.
**Impact**
Can't use power down PHY feature.
**Logs and console output**
Cause of build failure:
```
[97/220] Building C object zephyr/modules/hal/espressif/zephyr/esp32c3/src/bt/esp_bt_adapter.c.obj
zephyr/modules/hal/espressif/zephyr/esp32c3/src/bt/esp_bt_adapter.c: In function 'esp_bt_controller_init':
zephyr/modules/hal/espressif/zephyr/esp32c3/src/bt/esp_bt_adapter.c:1022:13: warning: implicit declaration of function 'esp_register_mac_bb_pd_callback' [->
1022 | if (esp_register_mac_bb_pd_callback(btdm_mac_bb_power_down_cb) != 0) {
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
zephyr/modules/hal/espressif/zephyr/esp32c3/src/bt/esp_bt_adapter.c:1027:13: warning: implicit declaration of function 'esp_register_mac_bb_pu_callback' [->
1027 | if (esp_register_mac_bb_pu_callback(btdm_mac_bb_power_up_cb) != 0) {
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
zephyr/modules/hal/espressif/zephyr/esp32c3/src/bt/esp_bt_adapter.c:1189:9: warning: implicit declaration of function 'esp_unregister_mac_bb_pd_callback' [>
1189 | esp_unregister_mac_bb_pd_callback(btdm_mac_bb_power_down_cb);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
zephyr/modules/hal/espressif/zephyr/esp32c3/src/bt/esp_bt_adapter.c:1191:9: warning: implicit declaration of function 'esp_unregister_mac_bb_pu_callback' [>
1191 | esp_unregister_mac_bb_pu_callback(btdm_mac_bb_power_up_cb);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
```
And then a link failure from lacking the above undeclared function.
```
riscv32-esp-elf/bin/ld.bfd: zephyr/libzephyr.a(esp_bt_adapter.c.obj): in function `esp_bt_power_domain_on':
zephyr/modules/hal/espressif/zephyr/esp32c3/src/bt/esp_bt_adapter.c:385: undefined reference to `esp_register_mac_bb_pd_callback'
riscv32-esp-elf/bin/ld.bfd: zephyr/libzephyr.a(esp_bt_adapter.c.obj): in function `esp_bt_controller_init':
zephyr/modules/hal/espressif/zephyr/esp32c3/src/bt/esp_bt_adapter.c:1022: undefined reference to `esp_register_mac_bb_pu_callback'
riscv32-esp-elf/bin/ld.bfd: zephyr/modules/hal/espressif/zephyr/esp32c3/src/bt/esp_bt_adapter.c:1168: undefined reference to `esp_unregister_mac_bb_pd_callback'
riscv32-esp-elf/bin/ld.bfd: zephyr/modules/hal/espressif/zephyr/esp32c3/src/bt/esp_bt_adapter.c:1168: undefined reference to `esp_unregister_mac_bb_pu_callback'
```
**Environment (please complete the following information):**
- Linux
- Crosstool-NG esp-12.2.0_20230208
- Zephyr v3.4.0-rc1-198-g0ae7812f6b",0, with ble setting config mac bb pd enabled doesn t build describe the bug zephyr fails to build when config mac bb pd power down mac and baseband of wi fi and bluetooth when phy is disabled is enabled to reproduce append config mac bb pd y to prj conf in any bluetooth sample e g samples bluetooth beacon build for an board e g west build b devkitm link failure when building expected behavior builds impact can t use power down phy feature logs and console output cause of build failure building c object zephyr modules hal espressif zephyr src bt esp bt adapter c obj zephyr modules hal espressif zephyr src bt esp bt adapter c in function esp bt controller init zephyr modules hal espressif zephyr src bt esp bt adapter c warning implicit declaration of function esp register mac bb pd callback if esp register mac bb pd callback btdm mac bb power down cb zephyr modules hal espressif zephyr src bt esp bt adapter c warning implicit declaration of function esp register mac bb pu callback if esp register mac bb pu callback btdm mac bb power up cb zephyr modules hal espressif zephyr src bt esp bt adapter c warning implicit declaration of function esp unregister mac bb pd callback esp unregister mac bb pd callback btdm mac bb power down cb zephyr modules hal espressif zephyr src bt esp bt adapter c warning implicit declaration of function esp unregister mac bb pu callback esp unregister mac bb pu callback btdm mac bb power up cb and then a link failure from lacking the above undeclared function esp elf bin ld bfd zephyr libzephyr a esp bt adapter c obj in function esp bt power domain on zephyr modules hal espressif zephyr src bt esp bt adapter c undefined reference to esp register mac bb pd callback esp elf bin ld bfd zephyr libzephyr a esp bt adapter c obj in function esp bt controller init zephyr modules hal espressif zephyr src bt esp bt adapter c undefined reference to esp register mac bb pu callback esp elf bin ld bfd zephyr modules hal espressif zephyr src bt esp bt adapter c undefined reference to esp unregister mac bb pd callback esp elf bin ld bfd zephyr modules hal espressif zephyr src bt esp bt adapter c undefined reference to esp unregister mac bb pu callback environment please complete the following information linux crosstool ng esp zephyr ,0
623,7588996106.0,IssuesEvent,2018-04-26 05:04:13,rancher/rancher,https://api.github.com/repos/rancher/rancher,closed,"API returns ""404 - Not Found"" error when trying to fetch container logs.",kind/bug setup/automation,"Server version - Build from master on Oct 10.
The following test case, which validates container logs, fails with a ""404 - Not Found"" error when trying to fetch the container logs.
When I fetch the container logs manually, I am able to get them:
_______________________________ test_native_logs _______________________________
client = , socat_containers = None
native_name = 'native-test-728364', pull_images = None
```
def test_native_logs(client, socat_containers, native_name, pull_images):
docker_client = get_docker_client(host(client))
test_msg = 'LOGS_WORK'
docker_container = docker_client. \
create_container(NATIVE_TEST_IMAGE,
name=native_name,
tty=True,
stdin_open=True,
detach=True,
command=['/bin/bash', '-c', 'echo ' + test_msg])
rancher_container, _ = start_and_wait(client, docker_container,
docker_client,
native_name)
```
> ```
> found_msg = search_logs(rancher_container, test_msg)
> ```
cattlevalidationtest/core/test_native_docker.py:223:
---
cattlevalidationtest/core/test_native_docker.py:280: in search_logs
logs = container.logs()
.tox/py27/local/lib/python2.7/site-packages/gdapi.py:233: in
*_args, **_kw)
.tox/py27/local/lib/python2.7/site-packages/gdapi.py:399: in action
return self._post(url, data=self._to_dict(*_args, **_kw))
.tox/py27/local/lib/python2.7/site-packages/gdapi.py:62: in wrapped
return fn(*_args, **_kw)
.tox/py27/local/lib/python2.7/site-packages/gdapi.py:273: in _post
self._error(r.text)
---
self =
text = '{""id"":""db6259a2-6bbf-4c2a-af01-cdcbd3dfa74c"",""type"":""error"",""links"":{},""actions"":{},""status"":404,""code"":""Not Found"",""message"":""Not Found"",""detail"":null}'
```
def _error(self, text):
```
> ```
> raise ApiError(self._unmarshall(text))
> ```
>
> E ApiError: (ApiError(...), ""Not Found : Not Found\n{'id': u'db6259a2-6bbf-4c2a-af01-cdcbd3dfa74c', 'actions': {}, 'type': u'error', 'status': 404, 'links': {}, 'code': u'Not Found', 'message': u'Not Found', 'detail': None}"")
",1.0,"API returns ""404 - Not Found"" error when trying to fetch container logs. - Server version - Build from master on Oct 10.
Following test cases which validates container logs fails with ""404 - Not Found"" error when trying to fetch container logs.
Manually when I try to fetch the logs of the container , I am able to get the container logs:
_______________________________ test_native_logs _______________________________
client = , socat_containers = None
native_name = 'native-test-728364', pull_images = None
```
def test_native_logs(client, socat_containers, native_name, pull_images):
docker_client = get_docker_client(host(client))
test_msg = 'LOGS_WORK'
docker_container = docker_client. \
create_container(NATIVE_TEST_IMAGE,
name=native_name,
tty=True,
stdin_open=True,
detach=True,
command=['/bin/bash', '-c', 'echo ' + test_msg])
rancher_container, _ = start_and_wait(client, docker_container,
docker_client,
native_name)
```
> ```
> found_msg = search_logs(rancher_container, test_msg)
> ```
cattlevalidationtest/core/test_native_docker.py:223:
---
cattlevalidationtest/core/test_native_docker.py:280: in search_logs
logs = container.logs()
.tox/py27/local/lib/python2.7/site-packages/gdapi.py:233: in
_args, *_kw)
.tox/py27/local/lib/python2.7/site-packages/gdapi.py:399: in action
return self._post(url, data=self._to_dict(_args, *_kw))
.tox/py27/local/lib/python2.7/site-packages/gdapi.py:62: in wrapped
return fn(_args, *_kw)
.tox/py27/local/lib/python2.7/site-packages/gdapi.py:273: in _post
self._error(r.text)
---
self =
text = '{""id"":""db6259a2-6bbf-4c2a-af01-cdcbd3dfa74c"",""type"":""error"",""links"":{},""actions"":{},""status"":404,""code"":""Not Found"",""message"":""Not Found"",""detail"":null}'
```
def _error(self, text):
```
> ```
> raise ApiError(self._unmarshall(text))
> ```
>
> E ApiError: (ApiError(...), ""Not Found : Not Found\n{'id': u'db6259a2-6bbf-4c2a-af01-cdcbd3dfa74c', 'actions': {}, 'type': u'error', 'status': 404, 'links': {}, 'code': u'Not Found', 'message': u'Not Found', 'detail': None}"")
",1,api returns not found error when trying to fetch container logs server version build from master on oct following test cases which validates container logs fails with not found error when trying to fetch container logs manually when i try to fetch the logs of the container i am able to get the container logs test native logs client socat containers none native name native test pull images none def test native logs client socat containers native name pull images docker client get docker client host client test msg logs work docker container docker client create container native test image name native name tty true stdin open true detach true command rancher container start and wait client docker container docker client native name found msg search logs rancher container test msg cattlevalidationtest core test native docker py cattlevalidationtest core test native docker py in search logs logs container logs tox local lib site packages gdapi py in args kw tox local lib site packages gdapi py in action return self post url data self to dict args kw tox local lib site packages gdapi py in wrapped return fn args kw tox local lib site packages gdapi py in post self error r text self text id type error links actions status code not found message not found detail null def error self text raise apierror self unmarshall text e apierror apierror not found not found n id u actions type u error status links code u not found message u not found detail none ,1
98528,4028235472.0,IssuesEvent,2016-05-18 04:49:56,lale-help/lale-help,https://api.github.com/repos/lale-help/lale-help,opened,Add documents to projects,priority:medium,"Similar to #307 (see spec here), we need to allow documents on projects.",1.0,"Add documents to projects - Similar to #307 (see spec here), we need to allow documents on projects.",0,add documents to projects similar to see spec here we need to allow documents on projects ,0
7965,25938258543.0,IssuesEvent,2022-12-16 15:55:37,kagemomiji/hubot-matteruser,https://api.github.com/repos/kagemomiji/hubot-matteruser,closed,Update CI actions,automation,"### Purpose
The CI workflows use old action versions and run duplicated jobs.
### Tasks
- Only run on pull requests
- Check only on Node 16
- Upgrade the action versions
CI actions use old version's action and duplicated running.
### Tasks
- Only run at pull request
- check at only node 16
- upgrade actions version",1,update ci actions purpose ci actions use old version s action and duplicated running tasks only run at pull request check at only node upgrade actions version,1
4794,17539885065.0,IssuesEvent,2021-08-12 10:39:59,maxim-nazarenko/tf-module-update,https://api.github.com/repos/maxim-nazarenko/tf-module-update,closed,Add Github actions,automation,"Add actions to run tests.
It would be a starting point for further automation like Docker image builds, releases, etc.
It would be a start point for further automation like docker image builds, releases, etc.",1,add github actions add actions to run tests it would be a start point for further automation like docker image builds releases etc ,1
15731,10340902247.0,IssuesEvent,2019-09-03 23:46:54,MicrosoftDocs/azure-docs,https://api.github.com/repos/MicrosoftDocs/azure-docs,closed,Incorrect Azure CLI compatibility,Pri2 cxp doc-enhancement service-fabric-mesh/svc triaged,"This is the error:
Skipping 'mesh-0.10.6-py2.py3-none-any.whl' as not compatible with this version of the CLI. Extension compatibility result: is_compatible=False cli_core_version=2.0.45 min_required=2.0.67 max_required=None
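For illustration only, here is a minimal Python sketch of the kind of minimum-version check that produces a message like the one above. The comparison logic and function names are assumptions for the sketch; the Azure CLI uses its own extension compatibility code.
```python
def parse_version(version):
    """Turn a dotted version string such as '2.0.45' into a comparable tuple of ints."""
    return tuple(int(part) for part in version.split("."))


def is_compatible(cli_core_version, min_required=None, max_required=None):
    """Return False when the installed CLI core falls outside the extension's supported range."""
    core = parse_version(cli_core_version)
    if min_required is not None and core < parse_version(min_required):
        return False
    if max_required is not None and core > parse_version(max_required):
        return False
    return True


# Values taken from the error message above: 2.0.45 < 2.0.67, so the extension is skipped.
print(is_compatible("2.0.45", min_required="2.0.67", max_required=None))  # False
```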
---
#### Document details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: a9790045-bddf-bfe7-1d13-2738f0a988d0
* Version Independent ID: 75b735d5-0a64-7214-417c-b34301c63c79
* Content: [Set up the Azure Service Fabric Mesh CLI](https://docs.microsoft.com/en-gb/azure/service-fabric-mesh/service-fabric-mesh-howto-setup-cli)
* Content Source: [articles/service-fabric-mesh/service-fabric-mesh-howto-setup-cli.md](https://github.com/Microsoft/azure-docs/blob/master/articles/service-fabric-mesh/service-fabric-mesh-howto-setup-cli.md)
* Service: **service-fabric-mesh**
* GitHub Login: @dkkapur
* Microsoft Alias: **dekapur**",1.0,"Incorrect Azure CLI compatibility - This is the error:
Skipping 'mesh-0.10.6-py2.py3-none-any.whl' as not compatible with this version of the CLI. Extension compatibility result: is_compatible=False cli_core_version=2.0.45 min_required=2.0.67 max_required=None
---
#### Document details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: a9790045-bddf-bfe7-1d13-2738f0a988d0
* Version Independent ID: 75b735d5-0a64-7214-417c-b34301c63c79
* Content: [Set up the Azure Service Fabric Mesh CLI](https://docs.microsoft.com/en-gb/azure/service-fabric-mesh/service-fabric-mesh-howto-setup-cli)
* Content Source: [articles/service-fabric-mesh/service-fabric-mesh-howto-setup-cli.md](https://github.com/Microsoft/azure-docs/blob/master/articles/service-fabric-mesh/service-fabric-mesh-howto-setup-cli.md)
* Service: **service-fabric-mesh**
* GitHub Login: @dkkapur
* Microsoft Alias: **dekapur**",0,incorrect azure cli compatibility this is the error skipping mesh none any whl as not compatible with this version of the cli extension compatibility result is compatible false cli core version min required max required none document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id bddf version independent id content content source service service fabric mesh github login dkkapur microsoft alias dekapur ,0
152804,12127046854.0,IssuesEvent,2020-04-22 18:02:34,cockroachdb/cockroach,https://api.github.com/repos/cockroachdb/cockroach,closed,roachtest: sqlsmith/setup=tpch-sf1/setting=no-mutations failed,C-test-failure O-roachtest O-robot branch-master release-blocker,"[(roachtest).sqlsmith/setup=tpch-sf1/setting=no-mutations failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=1891353&tab=buildLog) on [master@056e32e84831f13b286fceb7681dd0cd2b00b4b4](https://github.com/cockroachdb/cockroach/commits/056e32e84831f13b286fceb7681dd0cd2b00b4b4):
```
The test failed on branch=master, cloud=gce:
test artifacts and logs in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/sqlsmith/setup=tpch-sf1/setting=no-mutations/run_1
sqlsmith.go:185,sqlsmith.go:199,test_runner.go:753: ping: dial tcp 34.67.155.147:26257: connect: connection refused
previous sql:
SELECT
tab_111.ps_suppkey AS col_257
FROM
defaultdb.public.partsupp@[0] AS tab_111,
defaultdb.public.customer AS tab_112
JOIN defaultdb.public.customer AS tab_113 ON (tab_112.c_nationkey) = (tab_113.c_custkey),
defaultdb.public.customer@primary AS tab_114,
defaultdb.public.partsupp@[0] AS tab_115
WHERE
st_intersects('0101000000000000000000F03F000000000000F03F':::GEOMETRY::GEOMETRY, '0101000000000000000000F03F000000000000F03F':::GEOMETRY::GEOMETRY)::BOOL
LIMIT
23:::INT8;
cluster.go:1481,context.go:135,cluster.go:1470,test_runner.go:825: dead node detection: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/bin/roachprod monitor teamcity-1891353-1587540974-06-n4cpu4 --oneshot --ignore-empty-nodes: exit status 1 3: 6463
1: dead
2: 6316
4: 10941
Error: UNCLASSIFIED_PROBLEM: 1: dead
(1) UNCLASSIFIED_PROBLEM
Wraps: (2) 1: dead
| main.glob..func13
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachprod/main.go:1129
| main.wrap.func1
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachprod/main.go:272
| github.com/cockroachdb/cockroach/vendor/github.com/spf13/cobra.(*Command).execute
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/vendor/github.com/spf13/cobra/command.go:766
| github.com/cockroachdb/cockroach/vendor/github.com/spf13/cobra.(*Command).ExecuteC
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/vendor/github.com/spf13/cobra/command.go:852
| github.com/cockroachdb/cockroach/vendor/github.com/spf13/cobra.(*Command).Execute
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/vendor/github.com/spf13/cobra/command.go:800
| main.main
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachprod/main.go:1793
| runtime.main
| /usr/local/go/src/runtime/proc.go:203
| runtime.goexit
| /usr/local/go/src/runtime/asm_amd64.s:1357
Error types: (1) errors.Unclassified (2) *errors.fundamental
```
More
Artifacts: [/sqlsmith/setup=tpch-sf1/setting=no-mutations](https://teamcity.cockroachdb.com/viewLog.html?buildId=1891353&tab=artifacts#/sqlsmith/setup=tpch-sf1/setting=no-mutations)
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Asqlsmith%2Fsetup%3Dtpch-sf1%2Fsetting%3Dno-mutations.%2A&sort=title&restgroup=false&display=lastcommented+project)
powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
",2.0,"roachtest: sqlsmith/setup=tpch-sf1/setting=no-mutations failed - [(roachtest).sqlsmith/setup=tpch-sf1/setting=no-mutations failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=1891353&tab=buildLog) on [master@056e32e84831f13b286fceb7681dd0cd2b00b4b4](https://github.com/cockroachdb/cockroach/commits/056e32e84831f13b286fceb7681dd0cd2b00b4b4):
```
The test failed on branch=master, cloud=gce:
test artifacts and logs in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/sqlsmith/setup=tpch-sf1/setting=no-mutations/run_1
sqlsmith.go:185,sqlsmith.go:199,test_runner.go:753: ping: dial tcp 34.67.155.147:26257: connect: connection refused
previous sql:
SELECT
tab_111.ps_suppkey AS col_257
FROM
defaultdb.public.partsupp@[0] AS tab_111,
defaultdb.public.customer AS tab_112
JOIN defaultdb.public.customer AS tab_113 ON (tab_112.c_nationkey) = (tab_113.c_custkey),
defaultdb.public.customer@primary AS tab_114,
defaultdb.public.partsupp@[0] AS tab_115
WHERE
st_intersects('0101000000000000000000F03F000000000000F03F':::GEOMETRY::GEOMETRY, '0101000000000000000000F03F000000000000F03F':::GEOMETRY::GEOMETRY)::BOOL
LIMIT
23:::INT8;
cluster.go:1481,context.go:135,cluster.go:1470,test_runner.go:825: dead node detection: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/bin/roachprod monitor teamcity-1891353-1587540974-06-n4cpu4 --oneshot --ignore-empty-nodes: exit status 1 3: 6463
1: dead
2: 6316
4: 10941
Error: UNCLASSIFIED_PROBLEM: 1: dead
(1) UNCLASSIFIED_PROBLEM
Wraps: (2) 1: dead
| main.glob..func13
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachprod/main.go:1129
| main.wrap.func1
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachprod/main.go:272
| github.com/cockroachdb/cockroach/vendor/github.com/spf13/cobra.(*Command).execute
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/vendor/github.com/spf13/cobra/command.go:766
| github.com/cockroachdb/cockroach/vendor/github.com/spf13/cobra.(*Command).ExecuteC
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/vendor/github.com/spf13/cobra/command.go:852
| github.com/cockroachdb/cockroach/vendor/github.com/spf13/cobra.(*Command).Execute
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/vendor/github.com/spf13/cobra/command.go:800
| main.main
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachprod/main.go:1793
| runtime.main
| /usr/local/go/src/runtime/proc.go:203
| runtime.goexit
| /usr/local/go/src/runtime/asm_amd64.s:1357
Error types: (1) errors.Unclassified (2) *errors.fundamental
```
More
Artifacts: [/sqlsmith/setup=tpch-sf1/setting=no-mutations](https://teamcity.cockroachdb.com/viewLog.html?buildId=1891353&tab=artifacts#/sqlsmith/setup=tpch-sf1/setting=no-mutations)
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Asqlsmith%2Fsetup%3Dtpch-sf1%2Fsetting%3Dno-mutations.%2A&sort=title&restgroup=false&display=lastcommented+project)
powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
",0,roachtest sqlsmith setup tpch setting no mutations failed on the test failed on branch master cloud gce test artifacts and logs in home agent work go src github com cockroachdb cockroach artifacts sqlsmith setup tpch setting no mutations run sqlsmith go sqlsmith go test runner go ping dial tcp connect connection refused previous sql select tab ps suppkey as col from defaultdb public partsupp as tab defaultdb public customer as tab join defaultdb public customer as tab on tab c nationkey tab c custkey defaultdb public customer primary as tab defaultdb public partsupp as tab where st intersects geometry geometry geometry geometry bool limit cluster go context go cluster go test runner go dead node detection home agent work go src github com cockroachdb cockroach bin roachprod monitor teamcity oneshot ignore empty nodes exit status dead error unclassified problem dead unclassified problem wraps dead main glob home agent work go src github com cockroachdb cockroach pkg cmd roachprod main go main wrap home agent work go src github com cockroachdb cockroach pkg cmd roachprod main go github com cockroachdb cockroach vendor github com cobra command execute home agent work go src github com cockroachdb cockroach vendor github com cobra command go github com cockroachdb cockroach vendor github com cobra command executec home agent work go src github com cockroachdb cockroach vendor github com cobra command go github com cockroachdb cockroach vendor github com cobra command execute home agent work go src github com cockroachdb cockroach vendor github com cobra command go main main home agent work go src github com cockroachdb cockroach pkg cmd roachprod main go runtime main usr local go src runtime proc go runtime goexit usr local go src runtime asm s error types errors unclassified errors fundamental more artifacts powered by ,0
6012,21883751495.0,IssuesEvent,2022-05-19 16:28:25,mozilla-mobile/focus-ios,https://api.github.com/repos/mozilla-mobile/focus-ios,closed,Upgrade to Xcode 13.3,eng:automation,The new Xcode 13.3 is already available in Bitrise. Let's see if the app works and then upgrade it,1.0,Upgrade to Xcode 13.3 - The new Xcode 13.3 is already available in Bitrise. Let's see if the app works and then upgrade it,1,ugrade to xcode the new xcode is already available in bitrise let s see if the app works and then upgrade it,1
6175,22359272942.0,IssuesEvent,2022-06-15 18:43:16,aws/aws-iot-device-sdk-cpp-v2,https://api.github.com/repos/aws/aws-iot-device-sdk-cpp-v2,closed,Apple Silicon support,feature-request automation-exempt,"Hi,
Does the SDK support Apple Silicon?
If so, where can I find the build and test instructions?
thanks",1.0,"Apple Silicon support - Hi,
Does the sdk supports Apple Silicon?
If so, where can I found the build and test instructions?
thanks",1,apple silicon support hi does the sdk supports apple silicon if so where can i found the build and test instructions thanks,1
787488,27719281769.0,IssuesEvent,2023-03-14 19:14:04,MasterCruelty/robbot,https://api.github.com/repos/MasterCruelty/robbot,closed,[2.0] 📑 /atm timetables as photo,enhancement low priority refactor,"* [x] Try to convert subway timetable from the pdf web link to an image thus we can send it directly instead of clicking the link.
* [x] Do the same in case the waiting time is None.
",1.0,"[2.0] 📑 /atm timetables as photo - * [x] Try to convert subway timetable from the pdf web link to an image thus we can send it directly instead of clicking the link.
* [x] The same thing in case the waiting time is None.
",0, 📑 atm timetables as photo try to convert subway timetable from the pdf web link to an image thus we can send it directly instead of clicking the link the same thing in case the waiting time is none ,0
220338,16938119534.0,IssuesEvent,2021-06-27 00:49:24,bevyengine/bevy,https://api.github.com/repos/bevyengine/bevy,opened,Add a transform and Quaternion example section. ,documentation needs-triage,"## How can Bevy's documentation be improved?
Currently there is not really a specific place to add transformation examples other than 2d or 3d. I think this would encourage more examples to be created about this topic.
Examples that could be included:
- translation of object
- rotation of objects
- scaling of objects
- translation, rotation, and scaling at the same time
- show the difference between global transform and Transform
- the 3d parenting example could be moved into here.
- more examples showcasing the various methods on Transform and Quaternion.
",1.0,"Add a transform and Quaternion example section. - ## How can Bevy's documentation be improved?
Currently there is not really a specific place to add transformation examples other than 2d or 3d. I think this would encourage more examples to be created about this topic.
Examples that could be included:
- translation of object
- rotation of objects
- scaling of objects
- translation, rotation, and scaling at the same time
- show the difference between global transform and Transform
- the 3d parenting example could be moved into here.
- more examples showcasing the various methods on Transform and Quaternion.
",0,add a transform and quaternion example section how can bevy s documentation be improved currently there is not really a specific place to add transformation examples other than or i think this would encourage more examples to be created about this topic examples that could be included translation of object rotation of objects scaling of objects translation rotation and scaling at the same time show the difference between global transform and transform the parenting example could be moved into here more examples showcasing the various methods on transform and quaternion ,0
4591,16964969647.0,IssuesEvent,2021-06-29 09:46:03,keptn/keptn,https://api.github.com/repos/keptn/keptn,opened,Unify automated PRs to have the pipeline logic in the target repo,area:core area:go-utils automation refactoring release-automation type:chore,"Right now, some of the automated PRs are created from within the `keptn/keptn` repo but others are created from different repos like `keptn/go-utils` or `keptn/kubernetes-utils`. For an easier overview, this should be unified so that the dependency repos only send GitHub events to the `keptn/keptn` repo, and the update and PR creation logic should happen inside the main repo.
An example of this pattern can be seen in the `keptn/spec` repo, which sends an event to `keptn/go-utils` and `keptn/keptn` when a new release tag is created.
In the same manner `keptn/go-utils` and `keptn/kubernetes-utils` should send update events to `keptn/keptn`, which in turn updates its own dependencies accordingly.
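As a rough sketch of that pattern, a dependency repo could notify `keptn/keptn` through GitHub's `repository_dispatch` API; the event type, payload, and token handling below are assumptions for illustration, not the project's actual workflow.
```python
import os

import requests


def send_update_event(target_repo, event_type, payload):
    """POST a repository_dispatch event so the target repo can run its own update logic."""
    response = requests.post(
        f"https://api.github.com/repos/{target_repo}/dispatches",
        headers={
            "Authorization": f"token {os.environ['GITHUB_TOKEN']}",
            "Accept": "application/vnd.github.v3+json",
        },
        json={"event_type": event_type, "client_payload": payload},
    )
    response.raise_for_status()


# For example, triggered from keptn/go-utils after a new release tag is created.
send_update_event("keptn/keptn", "go-utils-release", {"tag": "v0.9.0"})
```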
",2.0,"Unify automated PRs to have the pipeline logic in the target repo - Right now, some of the automated PR are created from within the `keptn/keptn` repo but others are created from different repos like `keptn/go-utils` or `keptn/kubernetes-utils`. For an easier overview, this should be unified to that the dependency repos only send github events to the `keptn/keptn` repo and the update and PR creation logic should happen inside the main repo.
An example for this pattern can be seen in the `keptn/spec` repo which send an event to `keptn/go-utils` and `keptn/keptn` when a new release tag is created.
In the same manner `keptn/go-utils` and `keptn/kubernetes-utils` should send update events to `keptn/keptn`, which in turn updates its own dependencies accordingly.
",1,unify automated prs to have the pipeline logic in the target repo right now some of the automated pr are created from within the keptn keptn repo but others are created from different repos like keptn go utils or keptn kubernetes utils for an easier overview this should be unified to that the dependency repos only send github events to the keptn keptn repo and the update and pr creation logic should happen inside the main repo an example for this pattern can be seen in the keptn spec repo which send an event to keptn go utils and keptn keptn when a new release tag is created in the same manner keptn go utils and keptn kubernetes utils should send update events to keptn keptn which in turn updates its own dependencies accordingly ,1
8429,26964769121.0,IssuesEvent,2023-02-08 21:17:18,elastic/kibana,https://api.github.com/repos/elastic/kibana,closed,[Cloud Posture] Add functional tests for Findings Group By,automation Team:Cloud Security 8.8 candidate,"**Description**
We have 3 findings tables (names taken from folders):
- `latest_findings`
- `findings_by_resource`
- `resource_findings`
Our dropdown enables switching between `latest_findings` and `findings_by_resource`; the latter enables navigating to `resource_findings`.
The tests we add should verify all of that works :)
**Definition of done**
- Extend Findings service (page object) with action / assertion methods needed for Group By
- Implement test cases defined in [testrail](https://elastic.testrail.io/index.php?/suites/view/1116&group_by=cases:section_id&group_order=asc&display_deleted_cases=0&group_id=35310) for Findings Group By
**Out of scope**
- TBD
**Related tasks/epics**
- https://github.com/elastic/kibana/issues/140484 (Initial setup)
- https://github.com/elastic/kibana/issues/140490
**Checklist**
Please follow the checklist below at the beginning of your work, and comment with a suggested high-level solution. It should include:
- [ ] Comment describing high level implementation details
- [ ] Include API and data models
- [ ] Include assumptions being taken
- [ ] Provide backward/forward compatibility when changing data model schemas and key constants
- [ ] Mention relevant individuals with a reason (getting feedback, fyi etc)
- [ ] Submit a PR for our [technical index](https://github.com/elastic/security-team/blob/main/docs/cloud-security-posture-team/Technical_Index.md) that includes breaking changes/ new features
**Before closing this ticket**
- [ ] Commit the [technical index](https://github.com/elastic/security-team/blob/main/docs/cloud-security-posture-team/Technical_Index.md) PR
- [ ] Reference to tech-debts that shall be solved as we move forward
",1.0,"[Cloud Posture] Add functional tests for Findings Group By - **Description**
We have 3 findings tables (names taken from folders):
- `latest_findings`
- `findings_by_resource`
- `resource_findings`
our dropdown enables switching between `latest_findings` and `findings_by_resource`. the latter enables navigating to `resource_findings`
the tests we add should verify all of that works :)
**Definition of done**
- Extend Findings service (page object) with action / assertion methods needed for Group By
- Implement test cases defined in [testrail](https://elastic.testrail.io/index.php?/suites/view/1116&group_by=cases:section_id&group_order=asc&display_deleted_cases=0&group_id=35310) for Findings Group By
**Out of scope**
- TBD
**Related tasks/epics**
- https://github.com/elastic/kibana/issues/140484 (Initial setup)
- https://github.com/elastic/kibana/issues/140490
**Checklist**
Please follow the following checklist in the beginning of your work, please comment with a suggested of high level solution. It should include:
- [ ] Comment describing high level implementation details
- [ ] Include API and data models
- [ ] Include assumptions being taken
- [ ] Provide backward/forward compatibility when changing data model schemas and key constants
- [ ] Mention relevant individuals with a reason (getting feedback, fyi etc)
- [ ] Submit a PR for our [technical index](https://github.com/elastic/security-team/blob/main/docs/cloud-security-posture-team/Technical_Index.md) that includes breaking changes/ new features
**Before closing this ticket**
- [ ] Commit the [technical index](https://github.com/elastic/security-team/blob/main/docs/cloud-security-posture-team/Technical_Index.md) PR
- [ ] Reference to tech-debts that shall be solved as we move forward
",1, add functional tests for findings group by description we have findings tables names taken from folders latest findings findings by resource resource findings our dropdown enables switching between latest findings and findings by resource the latter enables navigating to resource findings the tests we add should verify all of that works definition of done extend findings service page object with action assertion methods needed for group by implement test cases defined in for findings group by out of scope tbd related tasks epics initial setup checklist please follow the following checklist in the beginning of your work please comment with a suggested of high level solution it should include comment describing high level implementation details include api and data models include assumptions being taken provide backward forward compatibility when changing data model schemas and key constants mention relevant individuals with a reason getting feedback fyi etc submit a pr for our that includes breaking changes new features before closing this ticket commit the pr reference to tech debts that shall be solved as we move forward ,1
38806,12601781437.0,IssuesEvent,2020-06-11 10:28:10,rammatzkvosky/1010-1,https://api.github.com/repos/rammatzkvosky/1010-1,opened,CVE-2020-10969 (High) detected in jackson-databind-2.8.8.jar,security vulnerability,"## CVE-2020-10969 - High Severity Vulnerability
Vulnerable Library - jackson-databind-2.8.8.jar
General data-binding functionality for Jackson: works on core streaming API
FasterXML jackson-databind 2.x before 2.9.10.4 mishandles the interaction between serialization gadgets and typing, related to javax.swing.JEditorPane.
FasterXML jackson-databind 2.x before 2.9.10.4 mishandles the interaction between serialization gadgets and typing, related to javax.swing.JEditorPane.
",0,cve high detected in jackson databind jar cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file tmp ws scm pom xml path to vulnerable library epository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy x jackson databind jar vulnerable library found in head commit a href vulnerability details fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to javax swing jeditorpane publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com fasterxml jackson core jackson databind com fasterxml jackson core jackson databind isopenpronvulnerability true ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to javax swing jeditorpane vulnerabilityurl ,0
324499,9904702201.0,IssuesEvent,2019-06-27 09:45:50,kubernetes-sigs/cluster-api-provider-gcp,https://api.github.com/repos/kubernetes-sigs/cluster-api-provider-gcp,closed, [FR] authentication with GCP,lifecycle/rotten priority/important-soon,Currently the authentication is done via cloud service account. Allow authentication similar to that in https://github.com/kubernetes/kubernetes/blob/master/pkg/cloudprovider/providers/gce/gce.go#L305,1.0, [FR] authentication with GCP - Currently the authentication is done via cloud service account. Allow authentication similar to that in https://github.com/kubernetes/kubernetes/blob/master/pkg/cloudprovider/providers/gce/gce.go#L305,0, authentication with gcp currently the authentication is done via cloud service account allow authentication similar to that in ,0
269199,20376113961.0,IssuesEvent,2022-02-21 15:46:46,Diversion2k22/Samarpan,https://api.github.com/repos/Diversion2k22/Samarpan,closed,Update README.md,documentation enhancement good first issue diversion-2k22,"### Update/ Improve README.md
- Add the image/ thumbnail of the application inside the file
- Add a section explaining the way of accessing the documents with screenshots
",1.0,"Update README.md - ### Update/ Improve README.md
- Add the image/ thumbnail of the application inside the file
- Add a section explaining the way of accessing the documents with screenshots
",0,update readme md update improve readme md add the image thumbnail of the application inside the file add a section explaining the way of accessing the documents with screenshots ,0
720,7887587826.0,IssuesEvent,2018-06-27 18:59:04,MicrosoftDocs/azure-docs,https://api.github.com/repos/MicrosoftDocs/azure-docs,closed,Shouldn't Get-AutomationConnection be replaced by Get-AzureRmAutomationConnection ?,assigned-to-author automation/svc doc-enhancement triaged,"
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 038d927f-2bcc-c62d-b3c3-f194513bced6
* Version Independent ID: 41adf2c5-3ab7-7387-e541-89e34aa6a6b1
* Content: [My first PowerShell runbook in Azure Automation](https://docs.microsoft.com/en-us/azure/automation/automation-first-runbook-textual-powershell)
* Content Source: [articles/automation/automation-first-runbook-textual-powershell.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-first-runbook-textual-powershell.md)
* Service: **automation**
* GitHub Login: @georgewallace
* Microsoft Alias: **gwallace**",1.0,"Shouldn't Get-AutomationConnection be replaced by Get-AzureRmAutomationConnection ? -
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 038d927f-2bcc-c62d-b3c3-f194513bced6
* Version Independent ID: 41adf2c5-3ab7-7387-e541-89e34aa6a6b1
* Content: [My first PowerShell runbook in Azure Automation](https://docs.microsoft.com/en-us/azure/automation/automation-first-runbook-textual-powershell)
* Content Source: [articles/automation/automation-first-runbook-textual-powershell.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-first-runbook-textual-powershell.md)
* Service: **automation**
* GitHub Login: @georgewallace
* Microsoft Alias: **gwallace**",1,shouldn t get automationconnection be replaced by get azurermautomationconnection document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service automation github login georgewallace microsoft alias gwallace ,1
6803,23936270938.0,IssuesEvent,2022-09-11 09:18:27,AdamXweb/awesome-aussie,https://api.github.com/repos/AdamXweb/awesome-aussie,opened,[ADDITION] Lumachain,Awaiting Review Added to Airtable Automation from Airtable,"### Category
Logistics
### Software to be added
Lumachain
### Supporting Material
URL: https://lumachain.io/
Description: Lumachain is a supply chain platform designed for tracking the origin, condition, and location of items in the food supply chain.
Size:
HQ: Sydney
LinkedIn: https://www.linkedin.com/company/lumachain/
#### See Record on Airtable:
https://airtable.com/app0Ox7pXdrBUIn23/tblYbuZoILuVA0X3L/rec7kKLaBKEWojA1f",1.0,"[ADDITION] Lumachain - ### Category
Logistics
### Software to be added
Lumachain
### Supporting Material
URL: https://lumachain.io/
Description: Lumachain is a supply chain platform designed for tracking the origin, condition, and location of items in the food supply chain.
Size:
HQ: Sydney
LinkedIn: https://www.linkedin.com/company/lumachain/
#### See Record on Airtable:
https://airtable.com/app0Ox7pXdrBUIn23/tblYbuZoILuVA0X3L/rec7kKLaBKEWojA1f",1, lumachain category logistics software to be added lumachain supporting material url description lumachain is a supply chain platform designed for tracking the origin condition and location of items in the food supply chain size hq sydney linkedin see record on airtable ,1
7967,25945852349.0,IssuesEvent,2022-12-17 00:49:01,influxdata/ui,https://api.github.com/repos/influxdata/ui,closed,Add function to New Script Editor toggle when it's turned off to trigger the VWO survey,team/ui team/automation,"In order to trigger the VWO survey after a user has turned off the New Script Editor, we need to include the logic in that specific slide toggle component to trigger the survey.
Here is the code we need to add to the onClick prop of the New Script Editor slide toggle:
`executeTrigger();`",1.0,"Add function to New Script Editor toggle when it's turned off to trigger the VWO survey - In order to trigger the VWO survey after a user has turned off the New Script Editor, we need to include the logic in that specific slide toggle component to trigger the survey.
Here is the code we need to add to onClick prop of the New Script Editor slide toggle:
`executeTrigger();`",1,add function to new script editor toggle when it s turned off to trigger the vwo survey in order to trigger the vwo survey after a user has turned off the new script editor we need to include the logic in that specific slide toggle component to trigger the survey here is the code we need to add to onclick prop of the new script editor slide toggle executetrigger ,1
180974,14844376867.0,IssuesEvent,2021-01-17 01:15:39,Objective-Redux/Objective-Redux,https://api.github.com/repos/Objective-Redux/Objective-Redux,closed,Add documentation page about adding new take effects,documentation,"**Executive Summary**
There should be a documentation topic that illustrates how to create new take effects creators.
**Justification**
Not every take effect/configuration can be supported by Objective-Redux. As such, the documentation should show how to create new ones without relying solely on what's provided.
**Proposed Solution**
A topic in the documentation for creating new take effects for Objective-Redux. The topic page should also link to the source code, which is a good example of how to create them.
**Current workarounds**
Describe what is currently being done to solve the problem in the absence of this feature.
**Work involved**
Include any research you've done into how the solution might be implemented.",1.0,"Add documentation page about adding new take effects - **Executive Summary**
There should be a documentation topic that illustrates how to create new take effects creators.
**Justification**
Not every take effect/configuration can be supported by Objective-Redux. As such, the documentation should show how to create new ones without relying solely on what's provided.
**Proposed Solution**
A topic in the documentation for creating new take effects for Objective-Redux. The topic page should also link to the source code, which is a good example of how to create them.
**Current workarounds**
Describe what is currently being done to solve the problem in the absence of this feature.
**Work involved**
Include any research you've done into how the solution might be implemented.",0,add documentation page about adding new take effects executive summary there should be a documentation topic that illustrates how to create new take effects creators justification not every take effect configuration can be supported by objective redux as such the documentation should show how to create new ones without relying solely on what s provided proposed solution a topic in the documentation for creating new take effects for objective redux the topic page should also link to the source code which is a good example of how to create them current workarounds describe what is currently being done to solve the problem in the absence of this feature work involved include any research you ve done into how the solution might be implemented ,0
2903,12753824708.0,IssuesEvent,2020-06-28 01:02:07,turicas/covid19-br,https://api.github.com/repos/turicas/covid19-br,closed,Fix the failing goodtables GH Action,automation,"The goodtables GitHub Action has been failing for a long time without anyone noticing.
The error should be fixed, and to give more visibility to the status of this Action, which is scheduled to run once a day, it would be worth adding a badge to the `README.md`.
https://github.com/turicas/covid19-br/actions?query=workflow%3Agoodtables",1.0,"Fix the failing goodtables GH Action - The goodtables GitHub Action has been failing for a long time without anyone noticing.
The error should be fixed, and to give more visibility to the status of this Action, which is scheduled to run once a day, it would be worth adding a badge to the `README.md`.
https://github.com/turicas/covid19-br/actions?query=workflow%3Agoodtables",1,corrigir falha no gh action do goodtables a github action do gootables está falhando faz tempo sem ser notado o erro deve ser corrigido e como uma forma de dar mais evidência ao status desta action que é agendada para executar uma vez por dia seria interessante colocar um badge no readme md ,1
9254,27801623364.0,IssuesEvent,2023-03-17 16:15:13,Automattic/sensei,https://api.github.com/repos/Automattic/sensei,opened,Update the next version script to exclude code behind a feature flag,[Type] Enhancement Release Automation,"
### Is your feature request related to a problem? Please describe
We need to figure out how to
update `scripts/replace-next-version-tag.sh` so that it doesn't replace the `$$next-version$$` placeholders for code that is behind a feature flag and has not yet shipped.
### Goals
- [ ] Communicate the proposed approach and ensure buy-in has been obtained before starting on implementation. This is especially important for any process changes that may be required.
- [ ] Implement a solution such that the `$$next-version$$` placeholders are **not** replaced for any code in `trunk` that is behind a feature flag.",1.0,"Update the next version script to exclude code behind a feature flag -
### Is your feature request related to a problem? Please describe
We need to figure out how to
update `scripts/replace-next-version-tag.sh` so that it doesn't replace the `$$next-version$$` placeholders for code that is behind a feature flag and has not yet shipped.
### Goals
- [ ] Communicate the proposed approach and ensure buy-in has been obtained before starting on implementation. This is especially important for any process changes that may be required.
- [ ] Implement a solution such that the `$$next-version$$` placeholders are **not** replaced for any code in `trunk` that is behind a feature flag.",1,update the next version script to exclude code behind a feature flag is your feature request related to a problem please describe we need to figure out how to update scripts replace next version tag sh so that it doesn t replace the next version placeholders for code that is behind a feature flag and has not yet shipped goals communicate the proposed approach and ensure buy in has been obtained before starting on implementation this is especially important for any process changes that may be required implement a solution such that the next version placeholders are not replaced for any code in trunk that is behind a feature flag ,1
42067,9126344826.0,IssuesEvent,2019-02-24 20:50:40,C0ZEN/ngx-store-test,https://api.github.com/repos/C0ZEN/ngx-store-test,closed,"Fix ""identical-code"" issue in src/app/views/todos/todos.component.ts",codeclimate,"Identical blocks of code found in 2 locations. Consider refactoring.
https://codeclimate.com/github/C0ZEN/ngx-store-test/src/app/views/todos/todos.component.ts#issue_5c72f6e276cfa600010000ff",1.0,"Fix ""identical-code"" issue in src/app/views/todos/todos.component.ts - Identical blocks of code found in 2 locations. Consider refactoring.
https://codeclimate.com/github/C0ZEN/ngx-store-test/src/app/views/todos/todos.component.ts#issue_5c72f6e276cfa600010000ff",0,fix identical code issue in src app views todos todos component ts identical blocks of code found in locations consider refactoring ,0
4163,15691963525.0,IssuesEvent,2021-03-25 18:31:15,ScottEgan/HomeAssistantConfig,https://api.github.com/repos/ScottEgan/HomeAssistantConfig,opened,Set bedroom pico to control all bedroom lights,automations,Use the off button to control bedroom light group,1.0,Set bedroom pico to control all bedroom lights - Use the off button to control bedroom light group,1,set bedroom pico to control all bedroom lights use the off button to control bedroom light group,1
435113,12531526153.0,IssuesEvent,2020-06-04 14:39:10,webcompat/web-bugs,https://api.github.com/repos/webcompat/web-bugs,closed,hanime.tv - see bug description,browser-focus-geckoview engine-gecko priority-normal,"
**URL**: https://hanime.tv/search
**Browser / Version**: Firefox Mobile 76.0
**Operating System**: Android
**Tested Another Browser**: No
**Problem type**: Something else
**Description**: doesn't block ads
**Steps to Reproduce**:
It doesn't block ads, desktop ad blockers do.
Browser Configuration
None
_From [webcompat.com](https://webcompat.com/) with ❤️_",1.0,"hanime.tv - see bug description -
**URL**: https://hanime.tv/search
**Browser / Version**: Firefox Mobile 76.0
**Operating System**: Android
**Tested Another Browser**: No
**Problem type**: Something else
**Description**: doesn't block ads
**Steps to Reproduce**:
It doesn't block ads, desktop ad blockers do.
Browser Configuration
None
_From [webcompat.com](https://webcompat.com/) with ❤️_",0,hanime tv see bug description url browser version firefox mobile operating system android tested another browser no problem type something else description doesn t block ads steps to reproduce it doesn t block ads desktop ad blockers do browser configuration none from with ❤️ ,0
472261,13621705345.0,IssuesEvent,2020-09-24 01:28:45,open-telemetry/opentelemetry-java,https://api.github.com/repos/open-telemetry/opentelemetry-java,closed,Update the attribute names for the OTel attributes for the zipkin exporter,good first issue help wanted priority:p2 release:required-for-ga,"See here: https://github.com/open-telemetry/opentelemetry-specification/pull/967
(although, we didn't match the previously spec'd names, either).",1.0,"Update the attribute names for the OTel attributes for the zipkin exporter - See here: https://github.com/open-telemetry/opentelemetry-specification/pull/967
(although, we didn't match the previously spec'd names, either).",0,update the attribute names for the otel attributes for the zipkin exporter see here although we didn t match the previously spec d names either ,0
32769,12149113902.0,IssuesEvent,2020-04-24 15:37:40,TreyM-WSS/terra-clinical,https://api.github.com/repos/TreyM-WSS/terra-clinical,opened,"CVE-2018-19797 (Medium) detected in node-sass-v4.13.1, CSS::Sass-v3.6.0",security vulnerability,"## CVE-2018-19797 - Medium Severity Vulnerability
Vulnerable Libraries -
Vulnerability Details
In LibSass 3.5.5, a NULL Pointer Dereference in the function Sass::Selector_List::populate_extends in SharedPtr.hpp (used by ast.cpp and ast_selectors.cpp) may cause a Denial of Service (application crash) via a crafted sass input file.
",True,"CVE-2018-19797 (Medium) detected in node-sass-v4.13.1, CSS::Sass-v3.6.0 - ## CVE-2018-19797 - Medium Severity Vulnerability
Vulnerable Libraries -
Vulnerability Details
In LibSass 3.5.5, a NULL Pointer Dereference in the function Sass::Selector_List::populate_extends in SharedPtr.hpp (used by ast.cpp and ast_selectors.cpp) may cause a Denial of Service (application crash) via a crafted sass input file.
",0,cve medium detected in node sass css sass cve medium severity vulnerability vulnerable libraries vulnerability details in libsass a null pointer dereference in the function sass selector list populate extends in sharedptr hpp used by ast cpp and ast selectors cpp may cause a denial of service application crash via a crafted sass input file publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution libsass isopenpronvulnerability true ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails in libsass a null pointer dereference in the function sass selector list populate extends in sharedptr hpp used by ast cpp and ast selectors cpp may cause a denial of service application crash via a crafted sass input file vulnerabilityurl ,0
515563,14965298284.0,IssuesEvent,2021-01-27 13:12:13,jbroutier/whatisflying-db,https://api.github.com/repos/jbroutier/whatisflying-db,opened,Missing Lockheed aircraft types pictures,Category: Aircraft type Priority: Normal,"Add pictures for the following aircraft types:
- [ ] L-14 Super Electra
- [ ] Ventura
- [ ] YO-3 Quiet Star
- [ ] T-33 Silver Star
- [ ] U-2",1.0,"Missing Lockheed aircraft types pictures - Add pictures for the following aircraft types:
- [ ] L-14 Super Electra
- [ ] Ventura
- [ ] YO-3 Quiet Star
- [ ] T-33 Silver Star
- [ ] U-2",0,missing lockheed aircraft types pictures add pictures for the following aircraft types l super electra ventura yo quiet star t silver star u ,0
6767,23874305766.0,IssuesEvent,2022-09-07 17:29:27,Journeyman-dev/FossSweeper,https://api.github.com/repos/Journeyman-dev/FossSweeper,closed,Verify Code Linting with Trunk on Push,automation,A GitHub Action should run on pushes that uses Trunk to verify if all files are linted correctly. This check will be a requirement for pull requests to be merged in order to ensure code quality.,1.0,Verify Code Linting with Trunk on Push - A GitHub Action should run on pushes that uses Trunk to verify if all files are linted correctly. This check will be a requirement for pull requests to be merged in order to ensure code quality.,1,verify code linting with trunk on push a github action should run on pushes that uses trunk to verify if all files are linted correctly this check will be a requirement for pull requests to be merged in order to ensure code quality ,1
35033,7887357875.0,IssuesEvent,2018-06-27 18:12:35,dotnet/coreclr,https://api.github.com/repos/dotnet/coreclr,closed,Remove sbyte overloads of Intel AES intrinsics,area-CodeGen enhancement,"Currently, each `Aes` intrinsic has `byte` and `sbyte` overloads, but the unsigned `byte` is probably sufficient for data encryption/decryption operations.
```csharp
public static class Aes
{
    public static bool IsSupported { get => IsSupported; }
    public static Vector128<byte> Decrypt(Vector128<byte> value, Vector128<byte> roundKey) => Decrypt(value, roundKey);
    public static Vector128<sbyte> Decrypt(Vector128<sbyte> value, Vector128<sbyte> roundKey) => Decrypt(value, roundKey);
    public static Vector128<byte> DecryptLast(Vector128<byte> value, Vector128<byte> roundKey) => DecryptLast(value, roundKey);
    public static Vector128<sbyte> DecryptLast(Vector128<sbyte> value, Vector128<sbyte> roundKey) => DecryptLast(value, roundKey);
    public static Vector128<byte> Encrypt(Vector128<byte> value, Vector128<byte> roundKey) => Encrypt(value, roundKey);
    public static Vector128<sbyte> Encrypt(Vector128<sbyte> value, Vector128<sbyte> roundKey) => Encrypt(value, roundKey);
    public static Vector128<byte> EncryptLast(Vector128<byte> value, Vector128<byte> roundKey) => EncryptLast(value, roundKey);
    public static Vector128<sbyte> EncryptLast(Vector128<sbyte> value, Vector128<sbyte> roundKey) => EncryptLast(value, roundKey);
    public static Vector128<byte> InvisibleMixColumn(Vector128<byte> value) => InvisibleMixColumn(value);
    public static Vector128<sbyte> InvisibleMixColumn(Vector128<sbyte> value) => InvisibleMixColumn(value);
    public static Vector128<byte> KeygenAssist(Vector128<byte> value, byte control) => KeygenAssist(value, control);
    public static Vector128<sbyte> KeygenAssist(Vector128<sbyte> value, byte control) => KeygenAssist(value, control);
}
```",1.0,"Remove sbyte overloads of Intel AES intrinsics - Currently, each `Aes` intrinsic has `byte` and `sbyte` overloads, but the unsigned `byte` is probably sufficient for data encryption/decryption operations.
```csharp
public static class Aes
{
    public static bool IsSupported { get => IsSupported; }
    public static Vector128<byte> Decrypt(Vector128<byte> value, Vector128<byte> roundKey) => Decrypt(value, roundKey);
    public static Vector128<sbyte> Decrypt(Vector128<sbyte> value, Vector128<sbyte> roundKey) => Decrypt(value, roundKey);
    public static Vector128<byte> DecryptLast(Vector128<byte> value, Vector128<byte> roundKey) => DecryptLast(value, roundKey);
    public static Vector128<sbyte> DecryptLast(Vector128<sbyte> value, Vector128<sbyte> roundKey) => DecryptLast(value, roundKey);
    public static Vector128<byte> Encrypt(Vector128<byte> value, Vector128<byte> roundKey) => Encrypt(value, roundKey);
    public static Vector128<sbyte> Encrypt(Vector128<sbyte> value, Vector128<sbyte> roundKey) => Encrypt(value, roundKey);
    public static Vector128<byte> EncryptLast(Vector128<byte> value, Vector128<byte> roundKey) => EncryptLast(value, roundKey);
    public static Vector128<sbyte> EncryptLast(Vector128<sbyte> value, Vector128<sbyte> roundKey) => EncryptLast(value, roundKey);
    public static Vector128<byte> InvisibleMixColumn(Vector128<byte> value) => InvisibleMixColumn(value);
    public static Vector128<sbyte> InvisibleMixColumn(Vector128<sbyte> value) => InvisibleMixColumn(value);
    public static Vector128<byte> KeygenAssist(Vector128<byte> value, byte control) => KeygenAssist(value, control);
    public static Vector128<sbyte> KeygenAssist(Vector128<sbyte> value, byte control) => KeygenAssist(value, control);
}
```",0,remove sbyte overloads of intel aes intrinsics currently each aes intrinsic has byte and sbyte overloads but the unsigned byte is probably sufficient for data encryption decryption operations csharp public static class aes public static bool issupported get issupported public static decrypt value roundkey decrypt value roundkey public static decrypt value roundkey decrypt value roundkey public static decryptlast value roundkey decryptlast value roundkey public static decryptlast value roundkey decryptlast value roundkey public static encrypt value roundkey encrypt value roundkey public static encrypt value roundkey encrypt value roundkey public static encryptlast value roundkey encryptlast value roundkey public static encryptlast value roundkey encryptlast value roundkey public static invisiblemixcolumn value invisiblemixcolumn value public static invisiblemixcolumn value invisiblemixcolumn value public static keygenassist value byte control keygenassist value control public static keygenassist value byte control keygenassist value control ,0
284014,24576937942.0,IssuesEvent,2022-10-13 13:03:09,mozilla-mobile/focus-android,https://api.github.com/repos/mozilla-mobile/focus-android,closed,Intermittent UI test failure - < AddToHomescreenTest. noNameShortcutTest >,eng:ui-test eng:intermittent-test,"### Firebase Test Run: 💥 Failed 1x
❗ In one run it failed due to:
[Firebase link](https://console.firebase.google.com/u/0/project/moz-focus-android/testlab/histories/bh.2189b040bbce6d5a/matrices/7859466701158352881/executions/bs.743fcfea29776606/testcases/1/test-cases)
### Stacktrace:
java.lang.AssertionError
at org.junit.Assert.fail(Assert.java:87)
at org.junit.Assert.assertTrue(Assert.java:42)
at org.junit.Assert.assertTrue(Assert.java:53)
at org.mozilla.focus.activity.robots.SearchRobot.typeInSearchBar(SearchRobot.kt:23)
at org.mozilla.focus.activity.robots.SearchRobot$Transition$loadPage$1.invoke(SearchRobot.kt:84)
at org.mozilla.focus.activity.robots.SearchRobot$Transition$loadPage$1.invoke(SearchRobot.kt:84)
at org.mozilla.focus.activity.robots.SearchRobotKt.searchScreen(SearchRobot.kt:122)
at org.mozilla.focus.activity.robots.SearchRobot$Transition.loadPage(SearchRobot.kt:84)
at org.mozilla.focus.activity.AddToHomescreenTest.noNameShortcutTest(AddToHomescreenTest.kt:77)
❗ In the other run it failed due to the ANR [#7344](https://github.com/mozilla-mobile/focus-android/issues/7344#issuecomment-1249132497)
### Build:9/16 Main
",2.0,"Intermittent UI test failure - < AddToHomescreenTest. noNameShortcutTest > - ### Firebase Test Run: 💥 Failed 1x
❗ In one run it failed due to:
[Firebase link](https://console.firebase.google.com/u/0/project/moz-focus-android/testlab/histories/bh.2189b040bbce6d5a/matrices/7859466701158352881/executions/bs.743fcfea29776606/testcases/1/test-cases)
### Stacktrace:
java.lang.AssertionError
at org.junit.Assert.fail(Assert.java:87)
at org.junit.Assert.assertTrue(Assert.java:42)
at org.junit.Assert.assertTrue(Assert.java:53)
at org.mozilla.focus.activity.robots.SearchRobot.typeInSearchBar(SearchRobot.kt:23)
at org.mozilla.focus.activity.robots.SearchRobot$Transition$loadPage$1.invoke(SearchRobot.kt:84)
at org.mozilla.focus.activity.robots.SearchRobot$Transition$loadPage$1.invoke(SearchRobot.kt:84)
at org.mozilla.focus.activity.robots.SearchRobotKt.searchScreen(SearchRobot.kt:122)
at org.mozilla.focus.activity.robots.SearchRobot$Transition.loadPage(SearchRobot.kt:84)
at org.mozilla.focus.activity.AddToHomescreenTest.noNameShortcutTest(AddToHomescreenTest.kt:77)
❗ In the other run it failed due to the ANR [#7344](https://github.com/mozilla-mobile/focus-android/issues/7344#issuecomment-1249132497)
### Build:9/16 Main
",0,intermittent ui test failure firebase test run 💥 failed ❗ in one run it failed due to stacktrace java lang assertionerror at org junit assert fail assert java at org junit assert asserttrue assert java at org junit assert asserttrue assert java at org mozilla focus activity robots searchrobot typeinsearchbar searchrobot kt at org mozilla focus activity robots searchrobot transition loadpage invoke searchrobot kt at org mozilla focus activity robots searchrobot transition loadpage invoke searchrobot kt at org mozilla focus activity robots searchrobotkt searchscreen searchrobot kt at org mozilla focus activity robots searchrobot transition loadpage searchrobot kt at org mozilla focus activity addtohomescreentest nonameshortcuttest addtohomescreentest kt ❗ in the other run it failed fur to the anr build main ,0
8248,26568923854.0,IssuesEvent,2023-01-20 23:59:30,influxdata/ui,https://api.github.com/repos/influxdata/ui,closed,Schema composition: treatment of Tags versus Fields.,enhancement team/automation,"Per discussion with the team. We are looking to treat Tags as a clearly different filter than Fields. We are looking to do the following:
* confirm that Tag use earlier in the query (before fields), improves the query performance. (e.g. large data sets)
* in the UI:
* have the schema browser be skewing the user to add tag filters earlier (rather than as the last thing in the schema browser list).
* other UI decisions -- TBD.
* in the flux-lsp:
* Tag versus Filter injection (in schema composition):
* Tag filters may be added multiple times in the query, in an append fashion.
* inject still done using the schema composition
* but the tag would be added as a new filter each time.
* newly added Fields
* extend in the existing Field filter.
",1.0,"Schema composition: treatment of Tags versus Fields. - Per discussion with the team. We are looking to treat Tags as a clearly different filter than Fields. We are looking to do the following:
* confirm that Tag use earlier in the query (before fields), improves the query performance. (e.g. large data sets)
* in the UI:
* have the schema browser be skewing the user to add tag filters earlier (rather than as the last thing in the schema browser list).
* other UI decisions -- TBD.
* in the flux-lsp:
* Tag versus Filter injection (in schema composition):
* Tag filters may be added multiple times in the query, in an append fashion.
* inject still done using the schema composition
* but the tag would be added as a new filter each time.
* newly added Fields
* extend in the existing Field filter.
",1,schema composition treatment of tags versus fields per discussion with the team we are looking to treat tags as a clearly different filter than fields we are looking to do the following confirm that tag use earlier in the query before fields improves the query performance e g large data sets in the ui have the schema browser be skewing the user to add tag filters earlier rather than as the last thing in the schema browser list other ui decisions tbd in the flux lsp tag versus filter injection in schema composition tag filters may be added multiple times in the query in an append fashion inject still done using the schema composition but the tag would be added as a new filter each time newly added fields extend in the existing field filter ,1
221040,24590548906.0,IssuesEvent,2022-10-14 01:28:22,vincenzodistasio97/home-cloud,https://api.github.com/repos/vincenzodistasio97/home-cloud,opened,"CVE-2022-37601 (High) detected in loader-utils-1.4.0.tgz, loader-utils-1.2.3.tgz",security vulnerability,"## CVE-2022-37601 - High Severity Vulnerability
Vulnerable Libraries - loader-utils-1.4.0.tgz, loader-utils-1.2.3.tgz
Path to vulnerable library: /client/node_modules/adjust-sourcemap-loader/node_modules/loader-utils/package.json,/client/node_modules/resolve-url-loader/node_modules/loader-utils/package.json,/client/node_modules/react-dev-utils/node_modules/loader-utils/package.json
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
For more information on CVSS3 Scores, click here.
Suggested Fix
Type: Upgrade version
Release Date: 2022-10-12
Fix Resolution (loader-utils): 2.0.0
Direct dependency fix Resolution (react-scripts): 5.0.1
Fix Resolution (loader-utils): 2.0.0
Direct dependency fix Resolution (react-scripts): 5.0.1
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2022-37601 (High) detected in loader-utils-1.4.0.tgz, loader-utils-1.2.3.tgz - ## CVE-2022-37601 - High Severity Vulnerability
Vulnerable Libraries - loader-utils-1.4.0.tgz, loader-utils-1.2.3.tgz
Path to vulnerable library: /client/node_modules/adjust-sourcemap-loader/node_modules/loader-utils/package.json,/client/node_modules/resolve-url-loader/node_modules/loader-utils/package.json,/client/node_modules/react-dev-utils/node_modules/loader-utils/package.json
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
For more information on CVSS3 Scores, click here.
Suggested Fix
Type: Upgrade version
Release Date: 2022-10-12
Fix Resolution (loader-utils): 2.0.0
Direct dependency fix Resolution (react-scripts): 5.0.1
Fix Resolution (loader-utils): 2.0.0
Direct dependency fix Resolution (react-scripts): 5.0.1
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve high detected in loader utils tgz loader utils tgz cve high severity vulnerability vulnerable libraries loader utils tgz loader utils tgz loader utils tgz utils for webpack loaders library home page a href path to dependency file client package json path to vulnerable library client node modules loader utils package json dependency hierarchy react scripts tgz root library css loader tgz x loader utils tgz vulnerable library loader utils tgz utils for webpack loaders library home page a href path to dependency file client package json path to vulnerable library client node modules adjust sourcemap loader node modules loader utils package json client node modules resolve url loader node modules loader utils package json client node modules react dev utils node modules loader utils package json dependency hierarchy react scripts tgz root library react dev utils tgz x loader utils tgz vulnerable library found in head commit a href found in base branch master vulnerability details prototype pollution vulnerability in function parsequery in parsequery js in webpack loader utils via the name variable in parsequery js publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version release date fix resolution loader utils direct dependency fix resolution react scripts fix resolution loader utils direct dependency fix resolution react scripts step up your open source security game with mend ,0
7738,25510343958.0,IssuesEvent,2022-11-28 12:39:23,red-hat-storage/ocs-ci,https://api.github.com/repos/red-hat-storage/ocs-ci,closed,PV encryption tests via UI does not skip for ODF > 4.8,ui_automation,"According to the test https://github.com/red-hat-storage/ocs-ci/blob/master/tests/ui/test_pv_encryption_ui.py, the test is expected to be skipped for ODF > 4.8. However the test is run in ODF 4.12 as seen from this run:
ocs-ci results for OCS4-12-Downstream-OCP4-12-VSPHERE6-UPI-KMS-VAULT-V1-1AZ-RHCOS-VSAN-3M-3W-tier1 (BUILD ID: 4.12.0-91 RUN ID: 1668107034)
",1.0,"PV encryption tests via UI does not skip for ODF > 4.8 - According to the test https://github.com/red-hat-storage/ocs-ci/blob/master/tests/ui/test_pv_encryption_ui.py, the test is expected to be skipped for ODF > 4.8. However the test is run in ODF 4.12 as seen from this run:
ocs-ci results for OCS4-12-Downstream-OCP4-12-VSPHERE6-UPI-KMS-VAULT-V1-1AZ-RHCOS-VSAN-3M-3W-tier1 (BUILD ID: 4.12.0-91 RUN ID: 1668107034)
",1,pv encryption tests via ui does not skip for odf according to the test the test is expected to be skipped for odf however the test is run in odf as seen from this run ocs ci results for downstream upi kms vault rhcos vsan build id run id ,1
59769,6662990658.0,IssuesEvent,2017-10-02 14:56:23,EasyRPG/Player,https://api.github.com/repos/EasyRPG/Player,closed,Message Stretch does not get restored when loading RPG_RT savegame,Patch available Savegames Testcase available,"#### Name of the game: Gromada ([download](http://tsukuru.pl/index.php?link=gra&title=2k-gromada))
#### Attach files (as a .zip archive or link them)
- A savegame next to the problem: [savegames.zip](https://github.com/EasyRPG/Player/files/1185288/gromada.zip) (I guess it is Save15, not tested)
#### Describe the issue in detail and how to reproduce it:
> The game uses tiled system graphics. EasyRPG doesn't detect that when loading a save file created by RPG_RT. This doesn't happen with save files generated by EasyRPG.
(issue reported by mail)
",1.0,"Message Stretch does not get restored when loading RPG_RT savegame - #### Name of the game: Gromada ([download](http://tsukuru.pl/index.php?link=gra&title=2k-gromada))
#### Attach files (as a .zip archive or link them)
- A savegame next to the problem: [savegames.zip](https://github.com/EasyRPG/Player/files/1185288/gromada.zip) (I guess it is Save15, not tested)
#### Describe the issue in detail and how to reproduce it:
> The game uses tiled system graphics. EasyRPG doesn't detect that when loading a save file created by RPG_RT. This doesn't happen with save files generated by EasyRPG.
(issue reported by mail)
",0,message stretch does not get restored when loading rpg rt savegame name of the game gromada attach files as a zip archive or link them a savegame next to the problem i guess it is not tested describe the issue in detail and how to reproduce it the game uses tiled system graphics easyrpg doesn t detect that when loading a save file created by rpg rt this doesn t happen with save files generated by easyrpg issue reported by mail ,0
1245,9763496927.0,IssuesEvent,2019-06-05 13:56:46,spacemeshos/go-spacemesh,https://api.github.com/repos/spacemeshos/go-spacemesh,closed,Notify user when Docker image was not found in Docker registry,automation,"# Overview / Motivation
Tests pull Docker Images from DockerHub.
This means that the required image which appear in config.yaml of the test must already be in DockerHub.
Currently, if the image does not exist, the deployment fails on timeout
# The Task
Notify the user that image does not exist in DockerHub
",1.0,"Notify user when Docker image was not found in Docker registry - # Overview / Motivation
Tests pull Docker Images from DockerHub.
This means that the required image which appear in config.yaml of the test must already be in DockerHub.
Currently, if the image does not exist, the deployment fails on timeout
# The Task
Notify the user that image does not exist in DockerHub
",1,notify user when docker image was not found in docker registry overview motivation tests pull docker images from dockerhub this means that the required image which appear in config yaml of the test must already be in dockerhub currently if image does not exists the deployment fails on timeout the task notify the user that image does not exist in dockerhub ,1
622676,19653756785.0,IssuesEvent,2022-01-10 10:17:41,EscolaLMS/Admin,https://api.github.com/repos/EscolaLMS/Admin,closed,[LMS Admin] #39 Cannot clear the value for...,high priority priority high,"The value of the additional_fields_required field cannot be cleared
*Source url:* https://admin-stage.escolalms.com/#/settings/escola_auth
*Reported by:*
*Reported at:* 30 Dec at 14:22 UTC
*Console:* [1× Error](https://ybug.io/dashboard/reports/detail/qvd6naj8y7a4y2p76qn7#console)
*Location:* PL, Lesser Poland, Stary Sacz
*Browser:* Chrome 96.0.4664.110
*OS:* macOS 10.15.7
*Screen:* 1920x1080
*Viewport:* 1920x1001
*Screenshot:*

For more details please visit report page on Ybug:
[https://ybug.io/dashboard/reports/detail/qvd6naj8y7a4y2p76qn7](https://ybug.io/dashboard/reports/detail/qvd6naj8y7a4y2p76qn7)
",2.0,"[LMS Admin] #39 Nie można wyczyścić wartości dla... - Nie można wyczyścić wartości dla pola additional_fields_required
*Source url:* https://admin-stage.escolalms.com/#/settings/escola_auth
*Reported by:*
*Reported at:* 30 Dec at 14:22 UTC
*Console:* [1× Error](https://ybug.io/dashboard/reports/detail/qvd6naj8y7a4y2p76qn7#console)
*Location:* PL, Lesser Poland, Stary Sacz
*Browser:* Chrome 96.0.4664.110
*OS:* macOS 10.15.7
*Screen:* 1920x1080
*Viewport:* 1920x1001
*Screenshot:*

For more details please visit report page on Ybug:
[https://ybug.io/dashboard/reports/detail/qvd6naj8y7a4y2p76qn7](https://ybug.io/dashboard/reports/detail/qvd6naj8y7a4y2p76qn7)
",0, nie można wyczyścić wartości dla nie można wyczyścić wartości dla pola additional fields required source url reported by reported at dec at utc console location pl lesser poland stary sacz browser chrome os macos screen viewport screenshot for more details please visit report page on ybug ,0
4870,17871310175.0,IssuesEvent,2021-09-06 15:56:30,betagouv/preuve-covoiturage,https://api.github.com/repos/betagouv/preuve-covoiturage,reopened,S3 configuration for the open data export lifetime and CNAME,INFRA Open Data Automation,"- [ ] Use the public S3 for the open data exports
- [ ] Register a CNAME for the public S3 & export",1.0,"S3 configuration for the open data export lifetime and CNAME - - [ ] Use the public S3 for the open data exports
- [ ] Register a CNAME for the public S3 & export",1,paramétrage pour la durée de vie des exports open data et cname utiliser le public pour les export opendata enregistrer un cname pour le public export,1
139940,20984011223.0,IssuesEvent,2022-03-28 23:41:36,dotnet/upgrade-assistant,https://api.github.com/repos/dotnet/upgrade-assistant,opened,upgrade-assistant upgrade . This tool is not supported on non-Windows platforms due to dependencies on Visual Studio.,design-proposal,"
## Summary
How about putting that higher in the README.MD, rather than having us assume it will work on Linux/Mac ootb?
## Motivation and goals
10-15 minutes time wasted to discover the tool isn't as XPLAT as it might be?
## In scope
A list of major scenarios, perhaps in priority order.
## Out of scope
Scenarios you explicitly want to exclude.
## Risks / unknowns
How might developers misinterpret/misuse this? How might implementing it restrict us from other enhancements in the future? Also list any perf/security/correctness concerns.
## Examples
Give brief examples of possible developer experiences (e.g., code they would write).
Don't be deeply concerned with how it would be implemented yet. Your examples could even be from other technology stacks.
",1.0,"upgrade-assistant upgrade . This tool is not supported on non-Windows platforms due to dependencies on Visual Studio. -
## Summary
How about putting that higher in the README.MD, rather than having us assume it will work on Linux/Mac ootb?
## Motivation and goals
10-15 minutes time wasted to discover the tool isn't as XPLAT as it might be?
## In scope
A list of major scenarios, perhaps in priority order.
## Out of scope
Scenarios you explicitly want to exclude.
## Risks / unknowns
How might developers misinterpret/misuse this? How might implementing it restrict us from other enhancements in the future? Also list any perf/security/correctness concerns.
## Examples
Give brief examples of possible developer experiences (e.g., code they would write).
Don't be deeply concerned with how it would be implemented yet. Your examples could even be from other technology stacks.
",0,upgrade assistant upgrade this tool is not supported on non windows platforms due to dependencies on visual studio this template is useful to build consensus about whether work should be done and if so the high level shape of how it should be approached use this before fixating on a particular implementation summary how about putting that higher in the readme md rather than having us assume it will work on linux mac ootb motivation and goals minutes time wasted to discover the tool isn t as xplat as it might be in scope a list of major scenarios perhaps in priority order out of scope scenarios you explicitly want to exclude risks unknowns how might developers misinterpret misuse this how might implementing it restrict us from other enhancements in the future also list any perf security correctness concerns examples give brief examples of possible developer experiences e g code they would write don t be deeply concerned with how it would be implemented yet your examples could even be from other technology stacks detailed design it s often best not to fill this out until you get basic consensus about the above when you do consider adding an implementation proposal with the following headings detailed design drawbacks considered alternatives open questions references if there s one clear design you have consensus on you could do that directly in a pr ,0
7002,24110414351.0,IssuesEvent,2022-09-20 10:52:03,mlcommons/ck,https://api.github.com/repos/mlcommons/ck,closed,[CK2/CM] Can we have post_preprocess_deps and pre_postprocess_deps?,enhancement cm-script-automation,"Currently we have ""deps"" which are executed before a script invocation and ""post_deps"" which are executed after a script invocation. But we do not have an option to run a cm script immediately after the preprocess function or immediately before the postprocess function. I think it'll be a good idea to have `post_preprocess_deps` and `pre_postprocess_deps` to add this functionality. This functionality can be useful in the following scenario
We have an application code in Python and C (both as independent CM scripts) and a CM wrapper script for both. We can now prepare the language independent inputs for the application in the preprocess function and depending on the language chosen (handled as variations) we can call the respective scripts as `post_preprocess_deps` which will do the actual application run and finally in the postprocess function we can process the produced outputs. Similarly, `pre_postprocess_deps` can also be useful as in the case of `python-venv`.",1.0,"[CK2/CM] Can we have post_preprocess_deps and pre_postprocess_deps? - Currently we have ""deps"" which are executed before a script invocation and ""post_deps"" which are executed after a script invocation. But we do not have an option to run a cm script immediately after the preprocess function or immediately before the postprocess function. I think it'll be a good idea to have `post_preprocess_deps` and `pre_postprocess_deps` to add this functionality. This functionality can be useful in the following scenario
We have an application code in Python and C (both as independent CM scripts) and a CM wrapper script for both. We can now prepare the language independent inputs for the application in the preprocess function and depending on the language chosen (handled as variations) we can call the respective scripts as `post_preprocess_deps` which will do the actual application run and finally in the postprocess function we can process the produced outputs. Similarly, `pre_postprocess_deps` can also be useful as in the case of `python-venv`.",1, can we have post preprocess deps and pre postprocess deps currently we have deps which are executed before a script invocation and post deps which are executed after a script invocation but we do not have an option to run a cm script immediately after the preprocess function or immediately before the postprocess function i think it ll be a good idea to have post preprocess deps and pre postprocess deps to add this functionality this functionality can be useful in the following scenario we have an application code in python and c both as independent cm scripts and a cm wrapper script for both we can now prepare the language independent inputs for the application in the preprocess function and depending on the language chosen handled as variations we can call the respective scripts as post preprocess deps which will do the actual application run and finally in the postprocess function we can process the produced outputs similarly pre postprocess deps can also be useful as in the case of python venv ,1
5435,19593871874.0,IssuesEvent,2022-01-05 15:43:55,nautobot/nautobot,https://api.github.com/repos/nautobot/nautobot,opened,Job aborts when run with value None for an optional ObjectVar,type: bug group: automation,"### Environment
* Python version: 3.6
* Nautobot version: 1.2.2
### Steps to Reproduce
1. Define the following Job:
```python
from nautobot.extras.jobs import Job, ObjectVar
from nautobot.dcim.models import Region
class OptionalObjectVar(Job):
    region = ObjectVar(
        description=""Region (optional)"",
        model=Region,
        required=False,
    )

    def run(self, data, commit):
        self.log_info(obj=data[""region""], message=""The Region if any that the user provided."")
```
2. Run this job after selecting a specific Region from the presented dropdown and verify that the job executes successfully.
3. Run this job again without selecting any Region from the dropdown.
### Expected Behavior
Job to run successfully and log the expected info message.
### Observed Behavior
Job errors out before running, with the following traceback:

```
Traceback (most recent call last):
File ""/source/nautobot/extras/jobs.py"", line 988, in run_job
data = job_class.deserialize_data(data)
File ""/source/nautobot/extras/jobs.py"", line 340, in deserialize_data
return_data[field_name] = var.field_attrs[""queryset""].get(pk=value)
File ""/usr/local/lib/python3.9/site-packages/cacheops/query.py"", line 353, in get
return qs._no_monkey.get(qs, *args, **kwargs)
File ""/usr/local/lib/python3.9/site-packages/django/db/models/query.py"", line 429, in get
raise self.model.DoesNotExist(
nautobot.dcim.models.sites.Region.DoesNotExist: Region matching query does not exist.
```
This appears to be a bug or oversight in the implementation of `Job.deserialize_data()`.",1.0,"Job aborts when run with value None for an optional ObjectVar - ### Environment
* Python version: 3.6
* Nautobot version: 1.2.2
### Steps to Reproduce
1. Define the following Job:
```python
from nautobot.extras.jobs import Job, ObjectVar
from nautobot.dcim.models import Region
class OptionalObjectVar(Job):
    region = ObjectVar(
        description=""Region (optional)"",
        model=Region,
        required=False,
    )

    def run(self, data, commit):
        self.log_info(obj=data[""region""], message=""The Region if any that the user provided."")
```
2. Run this job after selecting a specific Region from the presented dropdown and verify that the job executes successfully.
3. Run this job again without selecting any Region from the dropdown.
### Expected Behavior
Job to run successfully and log the expected info message.
### Observed Behavior
Job errors out before running, with the following traceback:

```
Traceback (most recent call last):
File ""/source/nautobot/extras/jobs.py"", line 988, in run_job
data = job_class.deserialize_data(data)
File ""/source/nautobot/extras/jobs.py"", line 340, in deserialize_data
return_data[field_name] = var.field_attrs[""queryset""].get(pk=value)
File ""/usr/local/lib/python3.9/site-packages/cacheops/query.py"", line 353, in get
return qs._no_monkey.get(qs, *args, **kwargs)
File ""/usr/local/lib/python3.9/site-packages/django/db/models/query.py"", line 429, in get
raise self.model.DoesNotExist(
nautobot.dcim.models.sites.Region.DoesNotExist: Region matching query does not exist.
```
This appears to be a bug or oversight in the implementation of `Job.deserialize_data()`.",1,job aborts when run with value none for an optional objectvar environment python version nautobot version steps to reproduce define the following job python from nautobot extras jobs import job objectvar from nautobot dcim models import region class optionalobjectvar job region objectvar description region optional model region required false def run self data commit self log info obj data message the region if any that the user provided run this job after selecting a specific region from the presented dropdown and verify that the job executes successfully run this job again without selecting any region from the dropdown expected behavior job to run successfully and log the expected info message observed behavior job errors out before running with the following traceback traceback most recent call last file source nautobot extras jobs py line in run job data job class deserialize data data file source nautobot extras jobs py line in deserialize data return data var field attrs get pk value file usr local lib site packages cacheops query py line in get return qs no monkey get qs args kwargs file usr local lib site packages django db models query py line in get raise self model doesnotexist nautobot dcim models sites region doesnotexist region matching query does not exist this appears to be a bug or oversight in the implementation of job deserialize data ,1
43918,17769687470.0,IssuesEvent,2021-08-30 12:13:29,hashicorp/terraform-provider-aws,https://api.github.com/repos/hashicorp/terraform-provider-aws,closed,aws_pinpoint_email_channel configuration set should use name instead of ARN,bug service/pinpoint,"
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave ""+1"" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
### Terraform CLI and Terraform AWS Provider Version
```
Terraform v0.14.8
+ provider registry.terraform.io/hashicorp/aws v3.37.0
```
### Affected Resource(s)
* aws_pinpoint_email_channel
### Terraform Configuration Files
Please include all Terraform configurations required to reproduce the bug. Bug reports without a functional reproduction may be closed without investigation.
This example is based off of the example configuration from the terraform docs:
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/pinpoint_email_channel
```hcl
resource ""aws_pinpoint_email_channel"" ""email"" {
application_id = aws_pinpoint_app.app.application_id
configuration_set = aws_ses_configuration_set.test.arn
from_address = ""user@example.com""
role_arn = aws_iam_role.role.arn
}
resource ""aws_ses_configuration_set"" ""test"" {
name = ""some-configuration-set-test""
}
resource ""aws_pinpoint_app"" ""app"" {}
resource ""aws_ses_domain_identity"" ""identity"" {
domain = ""example.com""
}
resource ""aws_iam_role"" ""role"" {
assume_role_policy = <
### Expected Behavior
The expected behavior is to be able to seamlessly configure a Pinpoint email channel using an SES configuration set passed in via the configuration_set property of the aws_pinpoint_email_channel resource.
After discussing with AWS support, we've come to the conclusion that [the documentation (and implementation) for specifying a configuration set to Pinpoint ](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/pinpoint_email_channel)is incorrect. Instead of supplying the ARN of the configuration set, the name of the configuration set should be given instead. The [documentation for the AWS Pinpoint Email Channel API](https://docs.aws.amazon.com/pinpoint/latest/apireference/apps-application-id-channels-email.html) is a little vague in that it doesn't directly specify whether to use the name or ARN of the configuration set, but the [documentation for the ConfigurationSet property](https://docs.aws.amazon.com/ses/latest/APIReference/API_ConfigurationSet.html) clearly states that it is the name of the configuration set and not the ARN.
### Actual Behavior
When I supplied the ARN of the config set (as terraform currently expects), the pinpoint project completely broke, meaning no emails were being sent. Furthermore, trying to edit the ""Open and click tracking settings"" from the Pinpoint email channel dashboard resulted in a series of `bad request` errors. We were able to resolve this by using the [`update-email-channel` AWS CLI command](https://docs.aws.amazon.com/cli/latest/reference/pinpoint/update-email-channel.html) to manually specify the configuration set name instead of the ARN to the email channel. After doing so, everything worked as expected.
### Steps to Reproduce
1. Create a [pinpoint app,](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/pinpoint_app) a [pinpoint email channel](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/pinpoint_email_channel), and an [SES configuration set](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ses_configuration_set)
2. In the aws_pinpoint_email_channel resource, when adding a configuration set, specify the ARN of the configuration set you created.
3. Apply changes",1.0,"aws_pinpoint_email_channel configuration set should use name instead of ARN -
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave ""+1"" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
### Terraform CLI and Terraform AWS Provider Version
```
Terraform v0.14.8
+ provider registry.terraform.io/hashicorp/aws v3.37.0
```
### Affected Resource(s)
* aws_pinpoint_email_channel
### Terraform Configuration Files
Please include all Terraform configurations required to reproduce the bug. Bug reports without a functional reproduction may be closed without investigation.
This example is based off of the example configuration from the terraform docs:
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/pinpoint_email_channel
```hcl
resource ""aws_pinpoint_email_channel"" ""email"" {
application_id = aws_pinpoint_app.app.application_id
configuration_set = aws_ses_configuration_set.test.arn
from_address = ""user@example.com""
role_arn = aws_iam_role.role.arn
}
resource ""aws_ses_configuration_set"" ""test"" {
name = ""some-configuration-set-test""
}
resource ""aws_pinpoint_app"" ""app"" {}
resource ""aws_ses_domain_identity"" ""identity"" {
domain = ""example.com""
}
resource ""aws_iam_role"" ""role"" {
assume_role_policy = <
### Expected Behavior
The expected behavior is to be able to seamlessly configure a Pinpoint email channel using an SES configuration set passed in via the configuration_set property of the aws_pinpoint_email_channel resource.
After discussing with AWS support, we've come to the conclusion that [the documentation (and implementation) for specifying a configuration set to Pinpoint ](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/pinpoint_email_channel)is incorrect. Instead of supplying the ARN of the configuration set, the name of the configuration set should be given instead. The [documentation for the AWS Pinpoint Email Channel API](https://docs.aws.amazon.com/pinpoint/latest/apireference/apps-application-id-channels-email.html) is a little vague in that it doesn't directly specify whether to use the name or ARN of the configuration set, but the [documentation for the ConfigurationSet property](https://docs.aws.amazon.com/ses/latest/APIReference/API_ConfigurationSet.html) clearly states that it is the name of the configuration set and not the ARN.
### Actual Behavior
When I supplied the ARN of the config set (as terraform currently expects), the pinpoint project completely broke, meaning no emails were being sent. Furthermore, trying to edit the ""Open and click tracking settings"" from the Pinpoint email channel dashboard resulted in a series of `bad request` errors. We were able to resolve this by using the [`update-email-channel` AWS CLI command](https://docs.aws.amazon.com/cli/latest/reference/pinpoint/update-email-channel.html) to manually specify the configuration set name instead of the ARN to the email channel. After doing so, everything worked as expected.
### Steps to Reproduce
1. Create a [pinpoint app,](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/pinpoint_app) a [pinpoint email channel](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/pinpoint_email_channel), and an [SES configuration set](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ses_configuration_set)
2. In the aws_pinpoint_email_channel resource, when adding a configuration set, specify the ARN of the configuration set you created.
3. Apply changes",0,aws pinpoint email channel configuration set should use name instead of arn please note the following potential times when an issue might be in terraform core or resource ordering issues and issues issues issues spans resources across multiple providers if you are running into one of these scenarios we recommend opening an issue in the instead community note please vote on this issue by adding a 👍 to the original issue to help the community and maintainers prioritize this request please do not leave or other comments that do not add relevant new information or questions they generate extra noise for issue followers and do not help prioritize the request if you are interested in working on this issue or have submitted a pull request please leave a comment terraform cli and terraform aws provider version terraform provider registry terraform io hashicorp aws affected resource s aws pinpoint email channel terraform configuration files please include all terraform configurations required to reproduce the bug bug reports without a functional reproduction may be closed without investigation this example is based off of the example configuration from the terraform docs hcl resource aws pinpoint email channel email application id aws pinpoint app app application id configuration set aws ses configuration set test arn from address user example com role arn aws iam role role arn resource aws ses configuration set test name some configuration set test resource aws pinpoint app app resource aws ses domain identity identity domain example com resource aws iam role role assume role policy eof version statement action sts assumerole principal service pinpoint amazonaws com effect allow sid eof resource aws iam role policy role policy name role policy role aws iam role role id policy eof version statement action mobileanalytics putevents mobileanalytics putitems effect allow resource eof expected behavior the expected behaviors is to be able to seamlessly configure a pinpoint email channel using an ses configuration set passed in via the configuration set property of the aws pinpoint email channel resource after discussing with aws support we ve come to the conclusion that incorrect instead of supplying the arn of the configuration set the name of the configuration set should be given instead the is a little vague in that it doesn t directly specify whether to use the name or arn of the configuration set but the clearly states that it is the name of the configuration set and not the arn actual behavior when i supplied the arn of the config set as terraform currently expects the pinpoint project completely broke meaning no emails were being sent furthermore trying to edit the open and click tracking settings from the pinpoint email channel dashboard resulted in a series of bad request errors we were able to resolve this by using the to manually specify the configuration set name instead of the arn to the email channel after doing so everything worked as expected steps to reproduce create a a and an in the aws pinpoint email channel resource when adding a configuration set specify the arn of the configuration set you created apply changes,0
7378,24743767268.0,IssuesEvent,2022-10-21 08:01:20,Azure/azure-sdk-tools,https://api.github.com/repos/Azure/azure-sdk-tools,closed,"azure-sdk-for-net PR check in azure-rest-api-specs fails with ""permission denied""",SDK Automation,"The azure-sdk-for-net PR check is failing in several PRs in the azure-rest-api-specs repo, e.g.
- [# 18174](https://github.com/Azure/azure-rest-api-specs/pull/18174/checks?check_run_id=6031561755)
- [# 18603](https://github.com/Azure/azure-rest-api-specs/pull/18603/checks?check_run_id=6031991775)
",1.0,"azure-sdk-for-net PR check in azure-rest-api-specs fails with ""permission denied"" - The azure-sdk-for net PR check is failing in several PRs in the azure-rest-api-specs repo, e.g.
- [# 18174](https://github.com/Azure/azure-rest-api-specs/pull/18174/checks?check_run_id=6031561755)
- [# 18603](https://github.com/Azure/azure-rest-api-specs/pull/18603/checks?check_run_id=6031991775)
",1,azure sdk for net pr check in azure rest api specs fails with permission denied the azure sdk for net pr check is failing in several prs in the azure rest api specs repo e g img width alt image src ,1
3217,13206167516.0,IssuesEvent,2020-08-14 19:34:34,coq-community/manifesto,https://api.github.com/repos/coq-community/manifesto,closed,Add Coq to Travis CI,automation,"## Meta-issue ##
Our current template builds everything in Docker, which is less flexible than what Travis can do with other languages. It'll be nice if we could have `language: coq` and run scripts conveniently.
Travis allows [community-supported languages](//docs.travis-ci.com/user/languages/community-supported-languages), and our community seems the right people to do that.
Similar issue in OCaml world: ocaml/ocaml-ci-scripts#53",1.0,"Add Coq to Travis CI - ## Meta-issue ##
Our current template builds everything in Docker, which is less flexible than what Travis can do with other languages. It'll be nice if we could have `language: coq` and run scripts conveniently.
Travis allows [community-supported languages](//docs.travis-ci.com/user/languages/community-supported-languages), and our community seems the right people to do that.
Similar issue in OCaml world: ocaml/ocaml-ci-scripts#53",1,add coq to travis ci meta issue our current template builds everything in docker which is less flexible than what travis can do with other languages it ll be nice if we could have language coq and run scripts conveniently travis allows docs travis ci com user languages community supported languages and our community seems the right people to do that similar issue in ocaml world ocaml ocaml ci scripts ,1
666,7745646186.0,IssuesEvent,2018-05-29 19:00:38,pypa/pip,https://api.github.com/repos/pypa/pip,closed,Investigate why the pypy3 CI job is timing out,C: automation P: pypy needs triage,"The CI job for pypy3 is timing out very often; I've restarted at least 10 CI builds as a result of this issue.
I'm opening this issue to bring it to the notice of others since I'm unable to figure out what exactly the problem is here.
If nothing else, this would make it more encouraging for myself to look into this when I have the time because closing issues has something satisfying to it. :P
",1.0,"Investigate why the pypy3 CI job is timing out - The CI job for pypy3 is timing out very often; I've restarted at least 10 CI builds as a result of this issue.
I'm opening this issue to bring it to the notice of others since I'm unable to figure out what exactly the problem is here.
If nothing else, this would make it more encouraging for myself to look into this when I have the time because closing issues has something satisfying to it. :P
",1,investigate why the ci job is timing out the ci job for is timing out very often i ve restarted at least ci builds as a result of this issue i m opening this issue to bring it to the notice of others since i m unable to figure out what exactly the problem is here if nothing else this would make it more encouraging for myself to look into this when i have the time because closing issues has something satisfying to it p ,1
148271,11845418859.0,IssuesEvent,2020-03-24 08:19:38,microsoft/AzureStorageExplorer,https://api.github.com/repos/microsoft/AzureStorageExplorer,opened,An error arises when creating shared access signature for one regular blob container,:beetle: regression :gear: blobs 🧪 testing,"**Storage Explorer Version:** 1.12.0
**Build**: [20200324.2](https://devdiv.visualstudio.com/DevDiv/_build/results?buildId=3577314)
**Branch**: master
**Platform/OS:** Windows 10/ Linux Ubuntu 18.04/ macOS High Sierra
**Architecture**: ia32/x64
**Regression From:** Previous release(1.12.0)
**Steps to reproduce:**
1. Expand one non-ADLS Gen2 storage account -> Blob Containers.
2. Create a new blob container -> Right click it.
3. Click 'Get Shared Access Signature...' -> Click 'Next' on the popped dialog.
4. Check the result.
**Expected Experience:**
No error arises.
**Actual Experience:**
The below error arises.

**More Info:**
This issue also reproduces for blobs.",1.0,"An error arises when creating shared access signature for one regular blob container - **Storage Explorer Version:** 1.12.0
**Build**: [20200324.2](https://devdiv.visualstudio.com/DevDiv/_build/results?buildId=3577314)
**Branch**: master
**Platform/OS:** Windows 10/ Linux Ubuntu 18.04/ macOS High Sierra
**Architecture**: ia32/x64
**Regression From:** Previous release(1.12.0)
**Steps to reproduce:**
1. Expand one non-ADLS Gen2 storage account -> Blob Containers.
2. Create a new blob container -> Right click it.
3. Click 'Get Shared Access Signature...' -> Click 'Next' on the popped dialog.
4. Check the result.
**Expected Experience:**
No error arises.
**Actual Experience:**
The below error arises.

**More Info:**
This issue also reproduces for blobs.",0,an error arises when creating shared access signature for one regular blob container storage explorer version build branch master platform os windows linux ubuntu macos high sierra architecture regression from previous release steps to reproduce expand one non adls storage account blob containers create a new blob container right click it click get shared access signature click next on the popped dialog check the result expect experience no error arises actual experience the below error arises more info this issue also reproduces for blobs ,0
135019,10959850927.0,IssuesEvent,2019-11-27 12:20:24,raiden-network/raiden,https://api.github.com/repos/raiden-network/raiden,closed,Add scenario for stressing a hub node,Component / Scenario Player Flag / Testing,"### Introduction
As part of https://github.com/raiden-network/team/issues/664 it was discovered that we need to have the https://github.com/raiden-network/raiden/blob/develop/raiden/tests/scenarios/ci/Scenario-Stress-Hub.yaml scenario updated to be run as part of the nightly scenarios. This is done to make sure that nodes are able to handle high loads.
### Description
It should suffice to adjust the already existing scenario to comply with the current standards we use for the other scenarios. Some asserts should however be added in the end in order to verify that everything worked as expected. ",1.0,"Add scenario for stressing a hub node - ### Introduction
As part of https://github.com/raiden-network/team/issues/664 it was discovered that we need to have the https://github.com/raiden-network/raiden/blob/develop/raiden/tests/scenarios/ci/Scenario-Stress-Hub.yaml scenario updated to be run as part of the nightly scenarios. This is done to make sure that nodes are able to handle high loads.
### Description
It should suffice to adjust the already existing scenario to comply with the current standards we use for the other scenarios. Some asserts should however be added in the end in order to verify that everything worked as expected. ",0,add scenario for stressing a hub node introduction as part of it was discovered that we need to have the scenario updated to be run as part of the nigthly scenarios this is done to make sure that nodes are able to handle high loads description it should suffice to adjust the already existing scenario to comply with the current standards we use for the other scenarios some asserts should however be added in the end in order to verify that everything worked as expected ,0
433466,12505816358.0,IssuesEvent,2020-06-02 11:26:40,OpenNebula/one,https://api.github.com/repos/OpenNebula/one,closed,Implement a find by criteria for OpenNebula resources,Category: Core & System Priority: Low Status: Accepted Type: Backlog,"---
Author Name: **OpenNebula Systems Support Team** (OpenNebula Systems Support Team)
Original Redmine Issue: 5462, https://dev.opennebula.org/issues/5462
Original Date: 2017-10-17
---
For instance, find a VM with a particular tag, or a particular NIC.
",1.0,"Implement a find by criteria for OpenNebula resources - ---
Author Name: **OpenNebula Systems Support Team** (OpenNebula Systems Support Team)
Original Redmine Issue: 5462, https://dev.opennebula.org/issues/5462
Original Date: 2017-10-17
---
For instance, find a VM with a particular tag, or a particular NIC.
",0,implement a find by criteria for opennebula resources author name opennebula systems support team opennebula systems support team original redmine issue original date for instance find a vm with a particular tag or a particular nic ,0
114193,4621303284.0,IssuesEvent,2016-09-27 00:30:29,4-20ma/i2c_adc_ads7828,https://api.github.com/repos/4-20ma/i2c_adc_ads7828,opened,Update README,Priority: Low Type: Maintenance," Match style/content of `ModbusMaster`
- [ ] use `.md` extension
- [ ] add standard title
- [ ] add badges
- [ ] 2 spaces before ##
- [ ] update sections:
- Overview
- Features (add device address: 0x20)
- Installation (update)
- Schematic (combine with Hardware)
- Example
- Caveats (add)
- Support (update, remove Questions/Feedback)
- [ ] convert backtick block language to cpp
- [ ] superscript i2c
- [ ] backtick `i2c_adc_ads7828`
- [ ] remove deprecated INSTALL
  - The README provides 3 installation methods, including links to arduino.cc.
",1.0,"Update README - Match style/content of `ModbusMaster`
- [ ] use `.md` extension
- [ ] add standard title
- [ ] add badges
- [ ] 2 spaces before ##
- [ ] update sections:
- Overview
- Features (add device address: 0x20)
- Installation (update)
- Schematic (combine with Hardware)
- Example
- Caveats (add)
- Support (update, remove Questions/Feedback)
- [ ] convert backtick block language to cpp
- [ ] superscript i2c
- [ ] backtick `i2c_adc_ads7828`
- [ ] remove deprecated INSTALL
  - The README provides 3 installation methods, including links to arduino.cc.
",0,update readme match style content of modbusmaster use md extension add standard title add badges spaces before update sections overview features add device address installation update schematic combine with hardware example caveats add support update remove questions feedback convert backtick block language to cpp superscript i c backtick adc remove deprecated install the readme provide installation methods including links to arduino cc ,0
23974,12167293930.0,IssuesEvent,2020-04-27 10:39:39,microsoft/react-native-windows,https://api.github.com/repos/microsoft/react-native-windows,closed,Move AsyncStorageManagerWin32 to run on a background thread,Area: Performance Platform: Desktop,"ASMW32 currently blocks on SQLite APIs completing, which may take a relatively long time (however long the disk IO takes). The implementation of ASM on Android & iOS do their disk IO asynchronously. We should make ASMW32 asynchronous for parity with those platforms.",True,"Move AsyncStorageManagerWin32 to run on a background thread - ASMW32 currently blocks on SQLite APIs completing, which may take a relatively long time (however long the disk IO takes). The implementation of ASM on Android & iOS do their disk IO asynchronously. We should make ASMW32 asynchronous for parity with those platforms.",0,move to run on a background thread currently blocks on sqlite apis completing which may take a relatively long time however long the disk io takes the implementation of asm on android ios do their disk io asynchronously we should make asynchronous for parity with those platforms ,0
83586,3637693128.0,IssuesEvent,2016-02-12 12:14:54,x-team/unleash,https://api.github.com/repos/x-team/unleash,closed,Smaller cards?,enhancement low priority,"I'd like to experiment with smaller cards; right now they seem a little bit clumsy.

",1.0,"Smaller cards? - I'd like to experiment with smaller cards; right now they seem a little bit clumsy.

",0,smaller cards i d like to experiment with smaller cards right now they seem a little bit clumsy ,0
130922,12465916336.0,IssuesEvent,2020-05-28 14:43:46,twoodby/github_base,https://api.github.com/repos/twoodby/github_base,opened,Updated readme,documentation,Update the readme to explain the new branch tasks and how to setup to use tasks,1.0,Updated readme - Update the readme to explain the new branch tasks and how to setup to use tasks,0,updated readme update the readme to explain the new branch tasks and how to setup to use tasks,0
271355,29477928539.0,IssuesEvent,2023-06-02 01:05:06,samq-ghdemo/SEARCH-NCJIS-nibrs,https://api.github.com/repos/samq-ghdemo/SEARCH-NCJIS-nibrs,opened,CVE-2021-23445 (Medium) detected in datatables-1.10.15.jar,Mend: dependency security vulnerability,"## CVE-2021-23445 - Medium Severity Vulnerability
Vulnerable Library - datatables-1.10.15.jar
Path to vulnerable library: /canner/.m2/repository/org/webjars/datatables/1.10.15/datatables-1.10.15.jar,/web/nibrs-web/target/nibrs-web/WEB-INF/lib/datatables-1.10.15.jar
This affects the package datatables.net before 1.11.3. If an array is passed to the HTML escape entities function it would not have its contents escaped.
***
- [ ] Check this box to open an automated fix PR
",True,"CVE-2021-23445 (Medium) detected in datatables-1.10.15.jar - ## CVE-2021-23445 - Medium Severity Vulnerability
Vulnerable Library - datatables-1.10.15.jar
Path to vulnerable library: /canner/.m2/repository/org/webjars/datatables/1.10.15/datatables-1.10.15.jar,/web/nibrs-web/target/nibrs-web/WEB-INF/lib/datatables-1.10.15.jar
This affects the package datatables.net before 1.11.3. If an array is passed to the HTML escape entities function it would not have its contents escaped.
***
- [ ] Check this box to open an automated fix PR
",0,cve medium detected in datatables jar cve medium severity vulnerability vulnerable library datatables jar webjar for datatables library home page a href path to dependency file web nibrs web pom xml path to vulnerable library canner repository org webjars datatables datatables jar web nibrs web target nibrs web web inf lib datatables jar dependency hierarchy x datatables jar vulnerable library found in head commit a href found in base branch master vulnerability details this affects the package datatables net before if an array is passed to the html escape entities function it would not have its contents escaped publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution check this box to open an automated fix pr ,0
127449,27046850520.0,IssuesEvent,2023-02-13 10:22:31,Regalis11/Barotrauma,https://api.github.com/repos/Regalis11/Barotrauma,closed,Favorite servers list problem,Bug Need more info Code Networking Unstable,"If you add a server to Favorites and restart Barotrauma, the server is listed as offline.
The server's endpoint in Data/favoriteservers.xml is changed from
`111.222.333.444:12345`
to
`[::ffff:111.222.333.444]:12345`
which makes it impossible to connect to that server again.
It looks like all representations of IP addresses are like that (for example, command ""clientlist"" gives the same format of addresses and all log files too)",1.0,"Favorite servers list problem - If you add a server to Favorites and restart Barotrauma, the server is listed as offline.
The server's endpoint in Data/favoriteservers.xml is changed from
`111.222.333.444:12345`
to
`[::ffff:111.222.333.444]:12345`
which makes it impossible to connect to that server again.
It looks like all representations of IP addresses are like that (for example, command ""clientlist"" gives the same format of addresses and all log files too)",0,favorite servers list problem if add a server to favorite and restart barotrauma server is listed as offline the server s endpoint in data favoriteservers xml is changed from to which makes not possible to connect to that server again it looks like all representation of ip addresses are like that for example command clientlist gives the same format of addresses and all log files too ,0
272225,20738572946.0,IssuesEvent,2022-03-14 15:40:39,expertsleepersltd/issues,https://api.github.com/repos/expertsleepersltd/issues,closed,CV/MIDI docs don't mention scaling relationship between voltage and MIDI CC,documentation disting mk4,"Parameters 2 & 3, when non-zero, allow you to generate CC messages from the X & Y inputs (using the parameter value as the CC number). If Y is to be converted to a CC, then notes are no longer generated.",1.0,"CV/MIDI docs don't mention scaling relationship between voltage and MIDI CC - Parameters 2 & 3, when non-zero, allow you to generate CC messages from the X & Y inputs (using the parameter value as the CC number). If Y is to be converted to a CC, then notes are no longer generated.",0,cv midi docs don t mention scaling relationship between voltage and midi cc parameters when non zero allow you to generate cc messages from the x y inputs using the parameter value as the cc number if y is to be converted to a cc then notes are no longer generated ,0
9253,27798441167.0,IssuesEvent,2023-03-17 14:13:07,aws-samples/eks-workshop-v2,https://api.github.com/repos/aws-samples/eks-workshop-v2,opened,Upgrade ArgoCD version for ArgoCD lab,enhancement content/automation,"### What would you like to be added?
Upgrade the version of ArgoCD helm chart used in terraform eks addon for the ArgoCD lab
### Why is this needed?
Keep up to date with bug and security fixes",1.0,"Upgrade ArgoCD version for ArgoCD lab - ### What would you like to be added?
Upgrade the version of ArgoCD helm chart used in terraform eks addon for the ArgoCD lab
### Why is this needed?
Keep up to date with bug and security fixes",1,upgrade argocd version for argocd lab what would you like to be added upgrade the version of argocd helm chart used in terraform eks addon for the argocd lab why is this needed keep up to date with bug and security fixes,1
7427,24847100160.0,IssuesEvent,2022-10-26 16:43:59,bcgov/api-services-portal,https://api.github.com/repos/bcgov/api-services-portal,closed,FAILED: Automated Tests(232),automation,"Stats: {
""suites"": 51,
""tests"": 379,
""passes"": 90,
""pending"": 0,
""failures"": 232,
""start"": ""2022-10-19T17:56:13.491Z"",
""end"": ""2022-10-19T18:34:31.911Z"",
""duration"": 690871,
""testsRegistered"": 379,
""passPercent"": 23.7467018469657,
""pendingPercent"": 0,
""other"": 0,
""hasOther"": false,
""skipped"": 57,
""hasSkipped"": true
}
Failed Tests:
""update the Dataset in BC Data Catelogue to appear the API in the Directory""
""publish product to directory""
""Create a Test environment""
""applies authorization plugin to service published to Kong Gateway""
""activate the service for Test environment""
""activate the service for Dev environment""
""Grant namespace access to Mark (access manager)""
""Grant CredentialIssuer.Admin permission to Janis (API Owner)""
""Collect the credentials""
""Close the popup without collecting credentials""
""authenticates Mark (Access-Manager)""
""Verify that the request status is Pending Approval""
""Collect the credentials""
""Verify that API is not accessible with the generated API Key when the request is not approved""
""authenticates Mark (Access-Manager)""
""verify the request details""
""Add group labels in request details window""
""approves an access request""
""authenticates Mark (Access-Manager)""
""Navigate to Consumer page and filter the product""
""Click on the first consumer""
""Click on Grant Access button""
""Grant Access to Test environment""
""Verify the service is accessible with API key for elevated access""
""Verify the service is accessibale with API key for free access""
""Verify the service is accessible with API key for elevated access""
""authenticates Mark (Access Manager)""
""Navigate to Consumer page and filter the product""
""Select the consumer from the list""
""set IP address that is not accessible in the network as allowed IP and set Route as scope""
""verify IP Restriction error when the API calls other than the allowed IP""
""set IP address that is not accessible in the network as allowed IP and set service as scope""
""verify IP Restriction error when the API calls other than the allowed IP""
""set IP address that is accessible in the network as allowed IP and set route as scope""
""verify the success stats when the API calls within the allowed IP range""
""set IP address that is accessible in the network as allowed IP and set service as scope""
""verify the success stats when the API calls within the allowed IP range""
""Navigate to Consumer page and filter the product""
""set api ip-restriction to global service level""
""Verify that IP Restriction is set at global service level""
""set IP address that is not accessible in the network as allowed IP and set service as scope""
""verify IP Restriction error when the API calls other than the allowed IP""
""Navigate to Consumer page and filter the product""
""set api ip-restriction to global service level""
""Verify that IP Restriction is set at global service level""
""set IP address that is not accessible in the network as allowed IP and set service as scope""
""verify IP Restriction error when the API calls other than the allowed IP""
""authenticates Mark (Access Manager)""
""Navigate to Consumer page and filter the product""
""Select the consumer from the list ""
""set api rate limit as per the test config, Local Policy and Scope as Service""
""verify rate limit error when the API calls beyond the limit""
""set api rate limit as per the test config, Local Policy and Scope as Route""
""verify rate limit error when the API calls beyond the limit""
""set api rate limit as per the test config, Redis Policy and Scope as Service""
""verify rate limit error when the API calls beyond the limit""
""set api rate limit as per the test config, Redis Policy and Scope as Route""
""verify rate limit error when the API calls beyond the limit""
""set api rate limit to global service level""
""Verify that Rate limiting is set at global service level""
""set api rate limit as per the test config, Redis Policy and Scope as Service""
""verify rate limit error when the API calls beyond the limit""
""set api rate limit to global service level""
""Verify that Rate limiting is set at global service level""
""set api rate limit as per the test config, Redis Policy and Scope as Service""
""verify rate limit error when the API calls beyond the limit""
""creates an access request""
""authenticates Mark (Access-Manager)""
""verify the request details""
""Add group labels in request details window""
""approves an access request""
""Verify that API is accessible with the generated API Key""
""authenticates Mark (Access-Manager)""
""verify that consumers are filters as per given parameter""
""authenticates Mark (Access-Manager)""
""Navigate to Consumer page and filter the product""
""Click on the first consumer""
""Verify that labels can be deleted""
""Verify that labels can be updated""
""Verify that labels can be added""
""Grant namespace access to access manager(Mark)""
""Grant CredentialIssuer.Admin permission to credential issuer(Wendy)""
""Select the namespace created for client credential ""
""Creates authorization profile for Client ID/Secret""
""Creates authorization profile for JWT - Generated Key Pair""
""Creates authorization profile for JWKS URL""
""Creates invalid authorization profile""
""Update the Dataset in BC Data Catalogue to appear the API in the Directory""
""Adds environment with Client ID/Secret authenticator to product""
""Adds environment with JWT - Generated Key Pair authenticator to product""
""Adds environment with JWT - JWKS URL authenticator to product""
""Applies authorization plugin to service published to Kong Gateway""
""activate the service for Test environment""
""Adds environment for invalid authorization profile to other""
""Creates an access request""
""Access Manager logs in""
""Select scopes in Authorization Tab""
""approves an access request""
""Get access token using client ID and secret; make API request""
""Creates an access request""
""Access Manager logs in""
""approves an access request""
""Get access token using JWT key pair; make API request""
""Creates an access request""
""Access Manager logs in""
""approves an access request""
""Get access token using JWT key pair; make API request""
""Get current API Key""
""Regenrate credential""
""Verify that new API key is set to the consumer""
""Verify that only one API key(new key) is set to the consumer in Kong gateway""
""Regenrate credential client ID and Secret""
""Make sure that the old client ID and Secret is disabled""
""update the Dataset in BC Data Catelogue to appear the API in the Directory""
""publish product to directory""
""applies authorization plugin to service published to Kong Gateway""
""Delete Product Environment""
""Delete the Product""
""authenticates Janis (api owner) to get the user session token""
""Get the resource and verify the success code in the response""
""Get the resource and verify the success code in the response""
""Navigate to activity page""
""Developer logs in""
""Authenticates api owner""
""authenticates Harley (developer)""
""authenticates Harley (developer)""
""authenticates Harley (developer)""
""authenticates Mark (Access-Manager)""
""authenticates Harley (developer)""
""Delete application""
""Verify that application is deleted""
""Verify that API is not accessible with the generated API Key when the application is deleted""
""authenticates Janis (api owner)""
""authenticates Janis (api owner)""
""authenticates Janis (api owner)""
""authenticates Janis (api owner)""
""authenticates Harley (developer)""
""authenticates Janis (api owner)""
""Authenticates Mark (Access-Manager)""
""Navigate to Consumer Page to see the Approve Request option""
""Verify that the option to approve request is displayed""
""Authenticates Janis (api owner)""
""Authenticates Wendy (Credential-Issuer)""
""Verify that all the namespace options and activities are displayed""
""Authenticates Janis (api owner)""
""Authenticates Wendy (Credential-Issuer)""
""Verify that only Authorization Profile option is displayed in Namespace page""
""Verify that authorization profile for Client ID/Secret is generated""
""Authenticates Janis (api owner)""
""authenticates Mark""
""Navigate to Consumer Page to see the Approve Request option""
""Navigate to Consumer Page to see the Approve Request option""
""Verify that service accounts are not created""
""authenticates Janis (api owner)""
""Authenticates Wendy (Credential-Issuer)""
""Verify that GWA API allows user to publish the API to Kong gateway""
""authenticates Janis (api owner)""
""authenticates Janis (api owner)""
""Prepare the Request Specification for the API""
""Prepare the Request Specification for the API""
""authenticates Janis (api owner) to get the user session token""
""Get the resource and verify the success code in the response""
""Compare the scope values in response against the expected values""
""Get the resource and verify the success code in the response""
""Compare the Namespace values in response against the expected values""
""Delete the namespace associated with the organization, organization unit and verify the success code in the response""
""Verify that the deleted Namespace is not displayed in Get Call""
""Add the access of the organization to the specific user and verify the success code in the response""
""Get the resource and verify the success code in the response""
""Compare the Namespace values in response against the expected values""
""authenticates Janis (api owner) to get the user session token""
""Put the resource and verify the success code in the response""
""Get the resource and verify the success code in the response""
""Compare the values in response against the values passed in the request""
""Verify the status code and response message for invalid slugvalue""
""Delete the documentation""
""Delete the documentation""
""Put the resource and verify the success code in the response""
""Verify that document contant is displayed for GET /documentation""
""Verify the status code and response message for invalid slug id""
""Verify that document contant is fetch by slug ID""
""authenticates Janis (api owner) to get the user session token""
""Put the resource and verify the success code in the response""
""Get the resource and verify the success code in the response""
""Compare the values in response against the values passed in the request""
""Delete the authorization profile""
""Verify that the authorization profile is deleted""
""Put the resource and verify the success code in the response""
""Get the resource and verify the success code in the response""
""Compare the values in response against the values passed in the request""
""Delete the authorization profile""
""Verify that the authorization profile is deleted""
""Put the resource and verify the success code in the response""
""Get the resource and verify the success code in the response""
""Compare the values in response against the values passed in the request""
""Delete the authorization profile""
""Verify that the authorization profile is deleted""
""authenticates Janis (api owner) to get the user session token""
""Put the resource and verify the success code in the response""
""Get the resource and verify the success code and product name in the response""
""Compare the values in response against the values passed in the request""
""authenticates Janis (api owner) to get the user session token""
""Delete the product environment and verify the success code in the response""
""Get the resource and verify that product environment is deleted""
""Delete the product and verify the success code in the response""
""Get the resource and verify that product is deleted""
""authenticates Janis (api owner) to get the user session token""
""Put the resource (/organization/{org}/datasets) and verify the success code in the response""
""Get the resource (/organization/{org}/datasets/{name}) and verify the success code in the response""
""Compare the values in response against the values passed in the request""
""Put the resource (/namespaces/{ns}/datasets/{name}) and verify the success code in the response""
""Get the resource (/namespaces/{ns}/datasets/{name}) and verify the success code in the response""
""Compare the values in response against the values passed in the request""
""Get the resource (/organizations/{org}/datasets/{name}) and verify the success code in the response""
""Compare the values in response against the values passed in the request""
""Get the resource (/organizations/{org}/datasets) and verify the success code in the response""
""Compare the values in response against the values passed in the request""
""Get the directory details (/directory) and verify the success code in the response""
""Get the directory details by its ID (/directory/{id}) and verify the success code in the response""
""Get the namespace directory details (/namespaces/{ns}/directory) and verify the success code and empty response for the namespace with no directory""
""Get the namespace directory details (/namespaces/{ns}/directory) and verify the success code in the response""
""Get the namespace directory details by its ID (/namespaces/{ns}/directory/{id}) and verify the success code in the response""
""Get the namespace directory details (/namespaces/{ns}/directory/{id}) for non exist directory ID and verify the response code""
""Delete the dataset (/organizations/{org}/datasets/{name}) and verify the success code in the response""
""Verify that deleted dataset does not display in Get dataset list""
""authenticates Janis (api owner) to get the user session token""
""Get the resource and verify the success code in the response""
""Verify that the selected Namespace is displayed in the Response list in the response""
""Get the resource and verify the success code in the response""
""Get the resource for namespace summary and verify the success code in the response""
""Delete the namespace and verify the Validation to prevent deleting the namespace""
""Force delete the namespace and verify the success code in the response""
Run Link: https://github.com/bcgov/api-services-portal/actions/runs/3283710064",1.0,"FAILED: Automated Tests(232) - Stats: {
""suites"": 51,
""tests"": 379,
""passes"": 90,
""pending"": 0,
""failures"": 232,
""start"": ""2022-10-19T17:56:13.491Z"",
""end"": ""2022-10-19T18:34:31.911Z"",
""duration"": 690871,
""testsRegistered"": 379,
""passPercent"": 23.7467018469657,
""pendingPercent"": 0,
""other"": 0,
""hasOther"": false,
""skipped"": 57,
""hasSkipped"": true
}
Failed Tests:
""update the Dataset in BC Data Catelogue to appear the API in the Directory""
""publish product to directory""
""Create a Test environment""
""applies authorization plugin to service published to Kong Gateway""
""activate the service for Test environment""
""activate the service for Dev environment""
""Grant namespace access to Mark (access manager)""
""Grant CredentialIssuer.Admin permission to Janis (API Owner)""
""Collect the credentials""
""Close the popup without collecting credentials""
""authenticates Mark (Access-Manager)""
""Verify that the request status is Pending Approval""
""Collect the credentials""
""Verify that API is not accessible with the generated API Key when the request is not approved""
""authenticates Mark (Access-Manager)""
""verify the request details""
""Add group labels in request details window""
""approves an access request""
""authenticates Mark (Access-Manager)""
""Navigate to Consumer page and filter the product""
""Click on the first consumer""
""Click on Grant Access button""
""Grant Access to Test environment""
""Verify the service is accessible with API key for elevated access""
""Verify the service is accessibale with API key for free access""
""Verify the service is accessible with API key for elevated access""
""authenticates Mark (Access Manager)""
""Navigate to Consumer page and filter the product""
""Select the consumer from the list""
""set IP address that is not accessible in the network as allowed IP and set Route as scope""
""verify IP Restriction error when the API calls other than the allowed IP""
""set IP address that is not accessible in the network as allowed IP and set service as scope""
""verify IP Restriction error when the API calls other than the allowed IP""
""set IP address that is accessible in the network as allowed IP and set route as scope""
""verify the success stats when the API calls within the allowed IP range""
""set IP address that is accessible in the network as allowed IP and set service as scope""
""verify the success stats when the API calls within the allowed IP range""
""Navigate to Consumer page and filter the product""
""set api ip-restriction to global service level""
""Verify that IP Restriction is set at global service level""
""set IP address that is not accessible in the network as allowed IP and set service as scope""
""verify IP Restriction error when the API calls other than the allowed IP""
""Navigate to Consumer page and filter the product""
""set api ip-restriction to global service level""
""Verify that IP Restriction is set at global service level""
""set IP address that is not accessible in the network as allowed IP and set service as scope""
""verify IP Restriction error when the API calls other than the allowed IP""
""authenticates Mark (Access Manager)""
""Navigate to Consumer page and filter the product""
""Select the consumer from the list ""
""set api rate limit as per the test config, Local Policy and Scope as Service""
""verify rate limit error when the API calls beyond the limit""
""set api rate limit as per the test config, Local Policy and Scope as Route""
""verify rate limit error when the API calls beyond the limit""
""set api rate limit as per the test config, Redis Policy and Scope as Service""
""verify rate limit error when the API calls beyond the limit""
""set api rate limit as per the test config, Redis Policy and Scope as Route""
""verify rate limit error when the API calls beyond the limit""
""set api rate limit to global service level""
""Verify that Rate limiting is set at global service level""
""set api rate limit as per the test config, Redis Policy and Scope as Service""
""verify rate limit error when the API calls beyond the limit""
""set api rate limit to global service level""
""Verify that Rate limiting is set at global service level""
""set api rate limit as per the test config, Redis Policy and Scope as Service""
""verify rate limit error when the API calls beyond the limit""
""creates an access request""
""authenticates Mark (Access-Manager)""
""verify the request details""
""Add group labels in request details window""
""approves an access request""
""Verify that API is accessible with the generated API Key""
""authenticates Mark (Access-Manager)""
""verify that consumers are filters as per given parameter""
""authenticates Mark (Access-Manager)""
""Navigate to Consumer page and filter the product""
""Click on the first consumer""
""Verify that labels can be deleted""
""Verify that labels can be updated""
""Verify that labels can be added""
""Grant namespace access to access manager(Mark)""
""Grant CredentialIssuer.Admin permission to credential issuer(Wendy)""
""Select the namespace created for client credential ""
""Creates authorization profile for Client ID/Secret""
""Creates authorization profile for JWT - Generated Key Pair""
""Creates authorization profile for JWKS URL""
""Creates invalid authorization profile""
""Update the Dataset in BC Data Catalogue to appear the API in the Directory""
""Adds environment with Client ID/Secret authenticator to product""
""Adds environment with JWT - Generated Key Pair authenticator to product""
""Adds environment with JWT - JWKS URL authenticator to product""
""Applies authorization plugin to service published to Kong Gateway""
""activate the service for Test environment""
""Adds environment for invalid authorization profile to other""
""Creates an access request""
""Access Manager logs in""
""Select scopes in Authorization Tab""
""approves an access request""
""Get access token using client ID and secret; make API request""
""Creates an access request""
""Access Manager logs in""
""approves an access request""
""Get access token using JWT key pair; make API request""
""Creates an access request""
""Access Manager logs in""
""approves an access request""
""Get access token using JWT key pair; make API request""
""Get current API Key""
""Regenrate credential""
""Verify that new API key is set to the consumer""
""Verify that only one API key(new key) is set to the consumer in Kong gateway""
""Regenrate credential client ID and Secret""
""Make sure that the old client ID and Secret is disabled""
""update the Dataset in BC Data Catelogue to appear the API in the Directory""
""publish product to directory""
""applies authorization plugin to service published to Kong Gateway""
""Delete Product Environment""
""Delete the Product""
""authenticates Janis (api owner) to get the user session token""
""Get the resource and verify the success code in the response""
""Get the resource and verify the success code in the response""
""Navigate to activity page""
""Developer logs in""
""Authenticates api owner""
""authenticates Harley (developer)""
""authenticates Harley (developer)""
""authenticates Harley (developer)""
""authenticates Mark (Access-Manager)""
""authenticates Harley (developer)""
""Delete application""
""Verify that application is deleted""
""Verify that API is not accessible with the generated API Key when the application is deleted""
""authenticates Janis (api owner)""
""authenticates Janis (api owner)""
""authenticates Janis (api owner)""
""authenticates Janis (api owner)""
""authenticates Harley (developer)""
""authenticates Janis (api owner)""
""Authenticates Mark (Access-Manager)""
""Navigate to Consumer Page to see the Approve Request option""
""Verify that the option to approve request is displayed""
""Authenticates Janis (api owner)""
""Authenticates Wendy (Credential-Issuer)""
""Verify that all the namespace options and activities are displayed""
""Authenticates Janis (api owner)""
""Authenticates Wendy (Credential-Issuer)""
""Verify that only Authorization Profile option is displayed in Namespace page""
""Verify that authorization profile for Client ID/Secret is generated""
""Authenticates Janis (api owner)""
""authenticates Mark""
""Navigate to Consumer Page to see the Approve Request option""
""Navigate to Consumer Page to see the Approve Request option""
""Verify that service accounts are not created""
""authenticates Janis (api owner)""
""Authenticates Wendy (Credential-Issuer)""
""Verify that GWA API allows user to publish the API to Kong gateway""
""authenticates Janis (api owner)""
""authenticates Janis (api owner)""
""Prepare the Request Specification for the API""
""Prepare the Request Specification for the API""
""authenticates Janis (api owner) to get the user session token""
""Get the resource and verify the success code in the response""
""Compare the scope values in response against the expected values""
""Get the resource and verify the success code in the response""
""Compare the Namespace values in response against the expected values""
""Delete the namespace associated with the organization, organization unit and verify the success code in the response""
""Verify that the deleted Namespace is not displayed in Get Call""
""Add the access of the organization to the specific user and verify the success code in the response""
""Get the resource and verify the success code in the response""
""Compare the Namespace values in response against the expected values""
""authenticates Janis (api owner) to get the user session token""
""Put the resource and verify the success code in the response""
""Get the resource and verify the success code in the response""
""Compare the values in response against the values passed in the request""
""Verify the status code and response message for invalid slugvalue""
""Delete the documentation""
""Delete the documentation""
""Put the resource and verify the success code in the response""
""Verify that document contant is displayed for GET /documentation""
""Verify the status code and response message for invalid slug id""
""Verify that document contant is fetch by slug ID""
""authenticates Janis (api owner) to get the user session token""
""Put the resource and verify the success code in the response""
""Get the resource and verify the success code in the response""
""Compare the values in response against the values passed in the request""
""Delete the authorization profile""
""Verify that the authorization profile is deleted""
""Put the resource and verify the success code in the response""
""Get the resource and verify the success code in the response""
""Compare the values in response against the values passed in the request""
""Delete the authorization profile""
""Verify that the authorization profile is deleted""
""Put the resource and verify the success code in the response""
""Get the resource and verify the success code in the response""
""Compare the values in response against the values passed in the request""
""Delete the authorization profile""
""Verify that the authorization profile is deleted""
""authenticates Janis (api owner) to get the user session token""
""Put the resource and verify the success code in the response""
""Get the resource and verify the success code and product name in the response""
""Compare the values in response against the values passed in the request""
""authenticates Janis (api owner) to get the user session token""
""Delete the product environment and verify the success code in the response""
""Get the resource and verify that product environment is deleted""
""Delete the product and verify the success code in the response""
""Get the resource and verify that product is deleted""
""authenticates Janis (api owner) to get the user session token""
""Put the resource (/organization/{org}/datasets) and verify the success code in the response""
""Get the resource (/organization/{org}/datasets/{name}) and verify the success code in the response""
""Compare the values in response against the values passed in the request""
""Put the resource (/namespaces/{ns}/datasets/{name}) and verify the success code in the response""
""Get the resource (/namespaces/{ns}/datasets/{name}) and verify the success code in the response""
""Compare the values in response against the values passed in the request""
""Get the resource (/organizations/{org}/datasets/{name}) and verify the success code in the response""
""Compare the values in response against the values passed in the request""
""Get the resource (/organizations/{org}/datasets) and verify the success code in the response""
""Compare the values in response against the values passed in the request""
""Get the directory details (/directory) and verify the success code in the response""
""Get the directory details by its ID (/directory/{id}) and verify the success code in the response""
""Get the namespace directory details (/namespaces/{ns}/directory) and verify the success code and empty response for the namespace with no directory""
""Get the namespace directory details (/namespaces/{ns}/directory) and verify the success code in the response""
""Get the namespace directory details by its ID (/namespaces/{ns}/directory/{id}) and verify the success code in the response""
""Get the namespace directory details (/namespaces/{ns}/directory/{id}) for non exist directory ID and verify the response code""
""Delete the dataset (/organizations/{org}/datasets/{name}) and verify the success code in the response""
""Verify that deleted dataset does not display in Get dataset list""
""authenticates Janis (api owner) to get the user session token""
""Get the resource and verify the success code in the response""
""Verify that the selected Namespace is displayed in the Response list in the response""
""Get the resource and verify the success code in the response""
""Get the resource for namespace summary and verify the success code in the response""
""Delete the namespace and verify the Validation to prevent deleting the namespace""
""Force delete the namespace and verify the success code in the response""
Run Link: https://github.com/bcgov/api-services-portal/actions/runs/3283710064",1,failed automated tests stats suites tests passes pending failures start end duration testsregistered passpercent pendingpercent other hasother false skipped hasskipped true failed tests update the dataset in bc data catelogue to appear the api in the directory publish product to directory create a test environment applies authorization plugin to service published to kong gateway activate the service for test environment activate the service for dev environment grant namespace access to mark access manager grant credentialissuer admin permission to janis api owner collect the credentials close the popup without collecting credentials authenticates mark access manager verify that the request status is pending approval collect the credentials verify that api is not accessible with the generated api key when the request is not approved authenticates mark access manager verify the request details add group labels in request details window approves an access request authenticates mark access manager navigate to consumer page and filter the product click on the first consumer click on grant access button grant access to test environment verify the service is accessible with api key for elevated access verify the service is accessibale with api key for free access verify the service is accessible with api key for elevated access authenticates mark access manager navigate to consumer page and filter the product select the consumer from the list set ip address that is not accessible in the network as allowed ip and set route as scope verify ip restriction error when the api calls other than the allowed ip set ip address that is not accessible in the network as allowed ip and set service as scope verify ip restriction error when the api calls other than the allowed ip set ip address that is accessible in the network as allowed ip and set route as scope verify the success stats when the api calls within the allowed ip range set ip address that is accessible in the network as allowed ip and set service as scope verify the success stats when the api calls within the allowed ip range navigate to consumer page and filter the product set api ip restriction to global service level verify that ip restriction is set at global service level set ip address that is not accessible in the network as allowed ip and set service as scope verify ip restriction error when the api calls other than the allowed ip navigate to consumer page and filter the product set api ip restriction to global service level verify that ip restriction is set at global service level set ip address that is not accessible in the network as allowed ip and set service as scope verify ip restriction error when the api calls other than the allowed ip authenticates mark access manager navigate to consumer page and filter the product select the consumer from the list set api rate limit as per the test config local policy and scope as service verify rate limit error when the api calls beyond the limit set api rate limit as per the test config local policy and scope as route verify rate limit error when the api calls beyond the limit set api rate limit as per the test config redis policy and scope as service verify rate limit error when the api calls beyond the limit set api rate limit as per the test config redis policy and scope as route verify rate limit error when the api calls beyond the limit set api rate limit to global service level verify that rate limiting is 
set at global service level set api rate limit as per the test config redis policy and scope as service verify rate limit error when the api calls beyond the limit set api rate limit to global service level verify that rate limiting is set at global service level set api rate limit as per the test config redis policy and scope as service verify rate limit error when the api calls beyond the limit creates an access request authenticates mark access manager verify the request details add group labels in request details window approves an access request verify that api is accessible with the generated api key authenticates mark access manager verify that consumers are filters as per given parameter authenticates mark access manager navigate to consumer page and filter the product click on the first consumer verify that labels can be deleted verify that labels can be updated verify that labels can be added grant namespace access to access manager mark grant credentialissuer admin permission to credential issuer wendy select the namespace created for client credential creates authorization profile for client id secret creates authorization profile for jwt generated key pair creates authorization profile for jwks url creates invalid authorization profile update the dataset in bc data catalogue to appear the api in the directory adds environment with client id secret authenticator to product adds environment with jwt generated key pair authenticator to product adds environment with jwt jwks url authenticator to product applies authorization plugin to service published to kong gateway activate the service for test environment adds environment for invalid authorization profile to other creates an access request access manager logs in select scopes in authorization tab approves an access request get access token using client id and secret make api request creates an access request access manager logs in approves an access request get access token using jwt key pair make api request creates an access request access manager logs in approves an access request get access token using jwt key pair make api request get current api key regenrate credential verify that new api key is set to the consumer verify that only one api key new key is set to the consumer in kong gateway regenrate credential client id and secret make sure that the old client id and secret is disabled update the dataset in bc data catelogue to appear the api in the directory publish product to directory applies authorization plugin to service published to kong gateway delete product environment delete the product authenticates janis api owner to get the user session token get the resource and verify the success code in the response get the resource and verify the success code in the response navigate to activity page developer logs in authenticates api owner authenticates harley developer authenticates harley developer authenticates harley developer authenticates mark access manager authenticates harley developer delete application verify that application is deleted verify that api is not accessible with the generated api key when the application is deleted authenticates janis api owner authenticates janis api owner authenticates janis api owner authenticates janis api owner authenticates harley developer authenticates janis api owner authenticates mark access manager navigate to consumer page to see the approve request option verify that the option to approve request is displayed authenticates janis api owner authenticates wendy 
credential issuer verify that all the namespace options and activities are displayed authenticates janis api owner authenticates wendy credential issuer verify that only authorization profile option is displayed in namespace page verify that authorization profile for client id secret is generated authenticates janis api owner authenticates mark navigate to consumer page to see the approve request option navigate to consumer page to see the approve request option verify that service accounts are not created authenticates janis api owner authenticates wendy credential issuer verify that gwa api allows user to publish the api to kong gateway authenticates janis api owner authenticates janis api owner prepare the request specification for the api prepare the request specification for the api authenticates janis api owner to get the user session token get the resource and verify the success code in the response compare the scope values in response against the expected values get the resource and verify the success code in the response compare the namespace values in response against the expected values delete the namespace associated with the organization organization unit and verify the success code in the response verify that the deleted namespace is not displayed in get call add the access of the organization to the specific user and verify the success code in the response get the resource and verify the success code in the response compare the namespace values in response against the expected values authenticates janis api owner to get the user session token put the resource and verify the success code in the response get the resource and verify the success code in the response compare the values in response against the values passed in the request verify the status code and response message for invalid slugvalue delete the documentation delete the documentation put the resource and verify the success code in the response verify that document contant is displayed for get documentation verify the status code and response message for invalid slug id verify that document contant is fetch by slug id authenticates janis api owner to get the user session token put the resource and verify the success code in the response get the resource and verify the success code in the response compare the values in response against the values passed in the request delete the authorization profile verify that the authorization profile is deleted put the resource and verify the success code in the response get the resource and verify the success code in the response compare the values in response against the values passed in the request delete the authorization profile verify that the authorization profile is deleted put the resource and verify the success code in the response get the resource and verify the success code in the response compare the values in response against the values passed in the request delete the authorization profile verify that the authorization profile is deleted authenticates janis api owner to get the user session token put the resource and verify the success code in the response get the resource and verify the success code and product name in the response compare the values in response against the values passed in the request authenticates janis api owner to get the user session token delete the product environment and verify the success code in the response get the resource and verify that product environment is deleted delete the product and verify the success code in the response 
get the resource and verify that product is deleted authenticates janis api owner to get the user session token put the resource organization org datasets and verify the success code in the response get the resource organization org datasets name and verify the success code in the response compare the values in response against the values passed in the request put the resource namespaces ns datasets name and verify the success code in the response get the resource namespaces ns datasets name and verify the success code in the response compare the values in response against the values passed in the request get the resource organizations org datasets name and verify the success code in the response compare the values in response against the values passed in the request get the resource organizations org datasets and verify the success code in the response compare the values in response against the values passed in the request get the directory details directory and verify the success code in the response get the directory details by its id directory id and verify the success code in the response get the namespace directory details namespaces ns directory and verify the success code and empty response for the namespace with no directory get the namespace directory details namespaces ns directory and verify the success code in the response get the namespace directory details by its id namespaces ns directory id and verify the success code in the response get the namespace directory details namespaces ns directory id for non exist directory id and verify the response code delete the dataset organizations org datasets name and verify the success code in the response verify that deleted dataset does not display in get dataset list authenticates janis api owner to get the user session token get the resource and verify the success code in the response verify that the selected namespace is displayed in the response list in the response get the resource and verify the success code in the response get the resource for namespace summary and verify the success code in the response delete the namespace and verify the validation to prevent deleting the namespace force delete the namespace and verify the success code in the response run link ,1
6274,22659497446.0,IssuesEvent,2022-07-02 00:41:49,wilkins88/ApexLibs,https://api.github.com/repos/wilkins88/ApexLibs,opened,Allow ordering on SObject Setting in triggers,enhancement Automation,"AC:
- Similar to how ordering can be applied to handlers, allow ordering on the sobject setting level to control ordering across packages",1.0,"Allow ordering on SObject Setting in triggers - AC:
- Similar to how ordering can be applied to handlers, allow ordering on the sobject setting level to control ordering across packages",1,allow ordering on sobject setting in triggers ac similar to how ordering can be applied to handlers allow ordering on the sobject setting level to control ordering across packages,1
203,4675950260.0,IssuesEvent,2016-10-07 09:51:43,cf-tm-bot/openstack_cpi,https://api.github.com/repos/cf-tm-bot/openstack_cpi,closed, extract lifecycle terraform from current test-terraform - Story Id: 127445791,chore ci env-creation-automation started,"currently, we have just one terraform script for bats and lifecycles. extract the lifecycle part, so we can run it separately.
---
Mirrors: [story 127445791](https://www.pivotaltracker.com/story/show/127445791) submitted on Aug 1, 2016 UTC
- **Requester**: Marco Voelz
- **Owners**: Felix Riegger, Mauro Morales
- **Estimate**: 0.0",1.0," extract lifecycle terraform from current test-terraform - Story Id: 127445791 - currently, we have just one terraform script for bats and lifecycles. extract the lifecycle part, so we can run it separately.
---
Mirrors: [story 127445791](https://www.pivotaltracker.com/story/show/127445791) submitted on Aug 1, 2016 UTC
- **Requester**: Marco Voelz
- **Owners**: Felix Riegger, Mauro Morales
- **Estimate**: 0.0",1, extract lifecycle terraform from current test terraform story id currently we have just one terraform script for bats and lifecycles extract the lifecycle part so we can run it separately mirrors submitted on aug utc requester marco voelz owners felix riegger mauro morales estimate ,1
3440,13766274373.0,IssuesEvent,2020-10-07 14:23:02,elastic/beats,https://api.github.com/repos/elastic/beats,closed,[CI] Run Journalbeat compatibility tests,Journalbeat Team:Automation [zube]: Inbox automation ci enhancement,"We need to test Journalbeat with different Linux distributions and different systems versions.
* CentOS/RHEL 6.5+/7.x (64 bits)
* CentOS/RHEL 8 (64 bits)
* Ubuntu 14.04 (32 bits)
* Ubuntu 14.04 (64 bits)
* Ubuntu 16.04 (64 bits)
* Ubuntu 18.04 (64 bits)
* Ubuntu 20.04 (64 bits)
* Debian 8 (64 bits)
* Debian 9 (64 bits)
* Debian 10 (64 bits)
We need a make/mage target to run the proper test because there are some differences between systems versions, those test would be different thus we will have to select the kind of test by passing parameters/env vars/whatever",2.0,"[CI] Run Journalbeat compatibility tests - We need to test Journalbeat with different Linux distributions and different systems versions.
* CentOS/RHEL 6.5+/7.x (64 bits)
* CentOS/RHEL 8 (64 bits)
* Ubuntu 14.04 (32 bits)
* Ubuntu 14.04 (64 bits)
* Ubuntu 16.04 (64 bits)
* Ubuntu 18.04 (64 bits)
* Ubuntu 20.04 (64 bits)
* Debian 8 (64 bits)
* Debian 9 (64 bits)
* Debian 10 (64 bits)
We need a make/mage target to run the proper test because there are some differences between systems versions, those test would be different thus we will have to select the kind of test by passing parameters/env vars/whatever",1, run journalbeat compatibility tests we need to test journalbeat with different linux distributions and different systems versions centos rhel x bits centos rhel bits ubuntu bits ubuntu bits ubuntu bits ubuntu bits ubuntu bits debian bits debian bits debian bits we need a make mage target to run the proper test because there are some differences between systems versions those test would be different thus we will have to select the kind of test by passing parameters env vars whatever,1
167392,13024032732.0,IssuesEvent,2020-07-27 11:05:20,hibernate/hibernate-reactive,https://api.github.com/repos/hibernate/hibernate-reactive,opened,Update tests after upgrade to vert.x sql client 3.9.2,testing,Now that the [client is updated](https://github.com/hibernate/hibernate-reactive/pull/293) some of the types that weren't working for DB2 will work.,1.0,Update tests after upgrade to vert.x sql client 3.9.2 - Now that the [client is updated](https://github.com/hibernate/hibernate-reactive/pull/293) some of the types that weren't working for DB2 will work.,0,update tests after upgrade to vert x sql client now that the some of the types that weren t working for will work ,0
17901,12685561805.0,IssuesEvent,2020-06-20 05:18:56,microsoft/TypeScript,https://api.github.com/repos/microsoft/TypeScript,closed,Unable to publish due to baseline difference in `.d.ts` emit,High Priority Infrastructure,"We seem to be hitting some sort of issue with parenthesization on `.d.ts` files. This has been blocking nightly publishes, and will block any sort of beta publish next week.
https://typescript.visualstudio.com/TypeScript/_build/results?buildId=76918&view=logs&j=fd490c07-0b22-5182-fac9-6d67fe1e939b&t=00933dce-c782-5c03-4a85-76379ccfa50a&l=139

",1.0,"Unable to publish due to baseline difference in `.d.ts` emit - We seem to be hitting some sort of issue with parenthesization on `.d.ts` files. This has been blocking nightly publishes, and will block any sort of beta publish next week.
https://typescript.visualstudio.com/TypeScript/_build/results?buildId=76918&view=logs&j=fd490c07-0b22-5182-fac9-6d67fe1e939b&t=00933dce-c782-5c03-4a85-76379ccfa50a&l=139

",0,unable to publish due to baseline difference in d ts emit we seem to be hitting some sort of issue with parenthesization on d ts files this has been blocking nightly publishes and will block any sort of beta publish next week ,0
539238,15785747731.0,IssuesEvent,2021-04-01 16:45:35,wso2/product-apim,https://api.github.com/repos/wso2/product-apim,closed,Remove unused analytics configuration UI from devportal and publisher,API-M 4.0.0 Priority/Normal REST APIs React-UI Type/Bug,"### Description:
### Steps to reproduce:

and

should be removed along with their respective components
### Affected Product Version:
### Environment details (with versions):
- OS:
- Client:
- Env (Docker/K8s):
---
### Optional Fields
#### Related Issues:
#### Suggested Labels:
#### Suggested Assignees:
",1.0,"Remove unused analytics configuration UI from devportal and publisher - ### Description:
### Steps to reproduce:

and

should be removed along with their respective components
### Affected Product Version:
### Environment details (with versions):
- OS:
- Client:
- Env (Docker/K8s):
---
### Optional Fields
#### Related Issues:
#### Suggested Labels:
#### Suggested Assignees:
",0,remove unused analytics configuration ui from devportal and publisher description steps to reproduce and should be removed along with their respective components affected product version environment details with versions os client env docker optional fields related issues suggested labels suggested assignees ,0
361,5718327206.0,IssuesEvent,2017-04-19 19:16:36,DevExpress/testcafe,https://api.github.com/repos/DevExpress/testcafe,closed,"A call of pressKey(""enter"") doesn't raise the ""click"" event on a button element",AREA: client SYSTEM: automations TYPE: bug,"### Are you requesting a feature or reporting a bug?
bug
### What is the current behavior?
A call of pressKey(""enter"") doesn't raise the ""click"" event on a button element
### What is the expected behavior?
Enter key press should raise the ""click"" event on a button element.
#### Provide the test code and the tested page URL (if applicable)
Test code
```js
test(""tester"", async t => {
await ClientFunction(() => {
el().focus();
}, {
dependencies: {
el: Selector(""#test"")
}
})();
await t
.wait(2000)
.pressKey(""enter"")
.wait(2000);
});
```
```html
```
### Specify your
* operating system:
* testcafe version:
* node.js version:",1.0,"A call of pressKey(""enter"") doesn't raise the ""click"" event on a button element - ### Are you requesting a feature or reporting a bug?
bug
### What is the current behavior?
A call of pressKey(""enter"") doesn't raise the ""click"" event on a button element
### What is the expected behavior?
Enter key press should raise the ""click"" event on a button element.
#### Provide the test code and the tested page URL (if applicable)
Test code
```js
test(""tester"", async t => {
await ClientFunction(() => {
el().focus();
}, {
dependencies: {
el: Selector(""#test"")
}
})();
await t
.wait(2000)
.pressKey(""enter"")
.wait(2000);
});
```
```html
```
### Specify your
* operating system:
* testcafe version:
* node.js version:",1,a call of presskey enter doesn t raise the click event on a button element are you requesting a feature or reporting a bug bug what is the current behavior a call of presskey enter doesn t raise the click event on a button element what is the expected behavior enter key press should raise the click event on a button element provide the test code and the tested page url if applicable test code js test tester async t await clientfunction el focus dependencies el selector test await t wait presskey enter wait html click me document getelementbyid test addeventlistener click function document getelementbyid res innerhtml clicked document getelementbyid test addeventlistener keypress function document getelementbyid res innerhtml keypress specify your operating system testcafe version node js version ,1
4095,15397879968.0,IssuesEvent,2021-03-03 22:55:10,MinaProtocol/mina,https://api.github.com/repos/MinaProtocol/mina,closed,Integration Test Core: Improved GraphQL Port Management,acceptance-automation,"In the current implementation of the integration test framework, whenever we need to query the GraphQL port of a daemon we have deployed, we begin a port-forwarding that pod's GraphQL port to a local port on the host machine running the integration test executive. This is done in an on-demand fashion, where we begin port forwarding as soon as we need to send the first GraphQL request to each node.
https://github.com/MinaProtocol/mina/blob/develop/src/lib/integration_test_cloud_engine/kubernetes_network.ml#L78
As we begin to scale the integration test framework to launch larger networks, and begin to utilize the GraphQL queries more and more, we need to ensure that this system for managing GraphQL ports is responsive enough to not cause large delays in the testing when we need to broadcast a GraphQL query out to a somewhat large set of nodes running on the network. In my personal testing (and this should probably be confirmed by someone else as well), running `kubectl port-forward ...` pauses and takes a little bit of time (seconds) to startup. Doing this on-demand for N nodes all at once would cause a somewhat lengthy delay in the integration test framework.
First, we should measure the performance of this system (eg, run a network with 50 nodes and see how long it takes to setup each of those port forwarding commands). From that, we should have a meeting to discuss how to approach this problem. There are likely a few ways to alleviate this, including setting up port-forwarding for all nodes at the start of the test rather than on-demand, or using kubectl to tunnel into pods when sending queries rather than exposing ports out of those pods to our local machine.
## Post Meeting
@0x0I, Helena, and myself met to discuss how we want to manage this moving forward. We landed on the following solution:
Setup a single, static ingress for all the integration tests. Deploy without setting explicit GraphQL ports, allowing kubernetes to dynamically assign random, free ports to each service. Once the deploy is done, before we start the test from the test_executive, query all of the ports that were assigned. Setup path-based routing in the ingress to each of these services and their respective GraphQL ports. Use this ingress as the single entrypoint to talk to all the GraphQL instances runnning on our nodes.",1.0,"Integration Test Core: Improved GraphQL Port Management - In the current implementation of the integration test framework, whenever we need to query the GraphQL port of a daemon we have deployed, we begin a port-forwarding that pod's GraphQL port to a local port on the host machine running the integration test executive. This is done in an on-demand fashion, where we begin port forwarding as soon as we need to send the first GraphQL request to each node.
https://github.com/MinaProtocol/mina/blob/develop/src/lib/integration_test_cloud_engine/kubernetes_network.ml#L78
As we begin to scale the integration test framework to launch larger networks, and begin to utilize the GraphQL queries more and more, we need to ensure that this system for managing GraphQL ports is responsive enough to not cause large delays in the testing when we need to broadcast a GraphQL query out to a somewhat large set of nodes running on the network. In my personal testing (and this should probably be confirmed by someone else as well), running `kubectl port-forward ...` pauses and takes a little bit of time (seconds) to startup. Doing this on-demand for N nodes all at once would cause a somewhat lengthy delay in the integration test framework.
First, we should measure the performance of this system (eg, run a network with 50 nodes and see how long it takes to setup each of those port forwarding commands). From that, we should have a meeting to discuss how to approach this problem. There are likely a few ways to alleviate this, including setting up port-forwarding for all nodes at the start of the test rather than on-demand, or using kubectl to tunnel into pods when sending queries rather than exposing ports out of those pods to our local machine.
## Post Meeting
@0x0I, Helena, and myself met to discuss how we want to manage this moving forward. We landed on the following solution:
Setup a single, static ingress for all the integration tests. Deploy without setting explicit GraphQL ports, allowing kubernetes to dynamically assign random, free ports to each service. Once the deploy is done, before we start the test from the test_executive, query all of the ports that were assigned. Setup path-based routing in the ingress to each of these services and their respective GraphQL ports. Use this ingress as the single entrypoint to talk to all the GraphQL instances runnning on our nodes.",1,integration test core improved graphql port management in the current implementation of the integration test framework whenever we need to query the graphql port of a daemon we have deployed we begin a port forwarding that pod s graphql port to a local port on the host machine running the integration test executive this is done in an on demand fashion where we begin port forwarding as soon as we need to send the first graphql request to each node as we begin to scale the integration test framework to launch larger networks and begin to utilize the graphql queries more and more we need to ensure that this system for managing graphql ports is responsive enough to not cause large delays in the testing when we need to broadcast a graphql query out to a somewhat large set of nodes running on the network in my personal testing and this should probably be confirmed by someone else as well running kubectl port forward pauses and takes a little bit of time seconds to startup doing this on demand for n nodes all at once would cause a somewhat lengthy delay in the integration test framework first we should measure the performance of this system eg run a network with nodes and see how long it takes to setup each of those port forwarding commands from that we should have a meeting to discuss how to approach this problem there are likely a few ways to alleviate this including setting up port forwarding for all nodes at the start of the test rather than on demand or using kubectl to tunnel into pods when sending queries rather than exposing ports out of those pods to our local machine post meeting helena and myself met to discuss how we want to manage this moving forward we landed on the following solution setup a single static ingress for all the integration tests deploy without setting explicit graphql ports allowing kubernetes to dynamically assign random free ports to each service once the deploy is done before we start the test from the test executive query all of the ports that were assigned setup path based routing in the ingress to each of these services and their respective graphql ports use this ingress as the single entrypoint to talk to all the graphql instances runnning on our nodes ,1
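The post-meeting plan above (deploy with dynamically assigned GraphQL ports, query the assignments after the deploy, then build path-based ingress routes) can be sketched roughly as follows. This is only an illustrative sketch, not the Mina test framework's code: the namespace, label selector, and path scheme are assumptions, and it simply shells out to `kubectl`.
```python
# Hedged sketch: collect the GraphQL ports kubernetes assigned to each node's
# service and derive one ingress path per node. Namespace and selector names
# are illustrative, not taken from the Mina integration test framework.
import json
import subprocess

def graphql_routes(namespace="integration-test", selector="app=mina-node"):
    out = subprocess.run(
        ["kubectl", "get", "svc", "-n", namespace, "-l", selector, "-o", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    routes = []
    for svc in json.loads(out)["items"]:
        name = svc["metadata"]["name"]
        port = svc["spec"]["ports"][0]["port"]  # assume the first port is GraphQL
        routes.append({"path": f"/{name}/graphql", "service": name, "port": port})
    return routes

if __name__ == "__main__":
    for route in graphql_routes():
        print(route)
```
Collecting the assignments once, right after deployment, avoids paying the per-node `kubectl port-forward` startup delay described in the report.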
107565,4310914557.0,IssuesEvent,2016-07-21 20:49:28,Toolwatchapp/tw-mobile,https://api.github.com/repos/Toolwatchapp/tw-mobile,closed,Wording typo fix ,effort: 1 (easy) priority: 3 (nice to have) type:enhancement,"Doesn't seem I could find that one in the i18n file :(
Please change deleting a watch confirm box ""Watch suppression"" to ""Delete watch"".
Thanks",1.0,"Wording typo fix - Doesn't seem I could find that one in the i18n file :(
Please change deleting a watch confirm box ""Watch suppression"" to ""Delete watch"".
Thanks",0,wording typo fix doesn t seem i could find that one in the file please change deleting a watch confirm box watch suppression to delete watch thanks,0
67777,9099937488.0,IssuesEvent,2019-02-20 06:50:47,poliastro/poliastro,https://api.github.com/repos/poliastro/poliastro,closed,Documentation CSS messed up by Jupyter stuff,bug documentation upstream,"Compare:
https://docs.poliastro.space/en/latest/
with:
https://docs.poliastro.space/en/stable/
And the culprit seems to be some inline CSS. It can be confirmed by adding this line to `/etc/hosts`:
```
127.0.0.1 unpkg.com
```",1.0,"Documentation CSS messed up by Jupyter stuff - Compare:
https://docs.poliastro.space/en/latest/
with:
https://docs.poliastro.space/en/stable/
And the culprit seems to be some inline CSS. It can be confirmed by adding this line to `/etc/hosts`:
```
127.0.0.1 unpkg.com
```",0,documentation css messed up by jupyter stuff compare with and the culprit seems to be some inline css it can confirmed by adding this line to etc hosts unpkg com ,0
270680,20605338247.0,IssuesEvent,2022-03-06 22:05:44,bounswe/bounswe2022group5,https://api.github.com/repos/bounswe/bounswe2022group5,closed,Creating wiki page for the research about favourite repos,Type: Documentation Type: Research,"#### Description:
A wiki page for displaying the collective result of our research about favourite github repositories mentioned in Assignment1 needs to be created. The research results should include name of the repo, link to the repo and a description.
#### To Do:
* Create a wiki page for research results
* Add a research result template as example",1.0,"Creating wiki page for the research about favourite repos - #### Description:
A wiki page for displaying the collective result of our research about favourite github repositories mentioned in Assignment1 needs to be created. The research results should include name of the repo, link to the repo and a description.
#### To Do:
* Create a wiki page for research results
* Add a research result template as example",0,creating wiki page for the research about favourite repos description a wiki page for displaying the collective result of our research about favourite github repositories mentioned in needs to be created the research results should include name of the repo link to the repo and a description to do create a wiki page for research results add a research result template as example,0
69930,15043648745.0,IssuesEvent,2021-02-03 01:10:51,yaeljacobs67/proxysql,https://api.github.com/repos/yaeljacobs67/proxysql,opened,CVE-2019-19645 (Medium) detected in wazuhv3.3.1,security vulnerability,"## CVE-2019-19645 - Medium Severity Vulnerability
Vulnerable Library - wazuhv3.3.1
alter.c in SQLite through 3.30.1 allows attackers to trigger infinite recursion via certain types of self-referential views in conjunction with ALTER TABLE statements.
alter.c in SQLite through 3.30.1 allows attackers to trigger infinite recursion via certain types of self-referential views in conjunction with ALTER TABLE statements.
",0,cve medium detected in cve medium severity vulnerability vulnerable library wazuh the open source security platform library home page a href vulnerable source files proxysql deps sqlite amalgamation tar sqlite amalgamation c proxysql deps sqlite amalgamation tar sqlite amalgamation c vulnerability details alter c in sqlite through allows attackers to trigger infinite recursion via certain types of self referential views in conjunction with alter table statements publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution ,0
156991,13669084718.0,IssuesEvent,2020-09-29 00:54:02,UnBArqDsw/2020.1_G7_TCM,https://api.github.com/repos/UnBArqDsw/2020.1_G7_TCM,closed,Collaboration/communication diagram,documentation,Produce the collaboration/communication diagram for the application.,1.0,Collaboration/communication diagram - Produce the collaboration/communication diagram for the application.,0,collaboration communication diagram produce the collaboration communication diagram for the application ,0
4236,15855685721.0,IssuesEvent,2021-04-08 00:28:33,aws/aws-cli,https://api.github.com/repos/aws/aws-cli,closed,completer doesn't work with file redirection in bash,autocomplete automation-exempt bug,"`aws ec2 describe-instances > `
doesn't let me choose a file.
",1.0,"completer doesn't work with file redirection in bash - `aws ec2 describe-instances > `
doesn't let me choose a file.
",1,completer doesn t work with file redirection in bash aws describe instances doesn t let me choose a file ,1
40688,12799618608.0,IssuesEvent,2020-07-02 15:40:34,TreyM-WSS/concord,https://api.github.com/repos/TreyM-WSS/concord,opened,CVE-2020-7608 (Medium) detected in yargs-parser-11.1.1.tgz,security vulnerability,"## CVE-2020-7608 - Medium Severity Vulnerability
Vulnerable Library - yargs-parser-11.1.1.tgz
",0,cve medium detected in yargs parser tgz cve medium severity vulnerability vulnerable library yargs parser tgz the mighty option parser used by yargs library home page a href path to dependency file tmp ws scm concord package json path to vulnerable library tmp ws scm concord node modules webpack dev server node modules yargs parser package json dependency hierarchy react scripts tgz root library webpack dev server tgz yargs tgz x yargs parser tgz vulnerable library found in head commit a href vulnerability details yargs parser could be tricked into adding or modifying properties of object prototype using a proto payload publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution isopenpronvulnerability true ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails yargs parser could be tricked into adding or modifying properties of object prototype using a proto payload vulnerabilityurl ,0
6354,22843857719.0,IssuesEvent,2022-07-13 02:30:32,keycloak/keycloak-benchmark,https://api.github.com/repos/keycloak/keycloak-benchmark,closed,add a keycloak benchmark ci runner shell script to run simulations on a remote ci agent,automation,"### Description
add a keycloak benchmark ci runner shell script to run simulations on a remote ci agent
### Discussion
_No response_
### Motivation
To be able to scale the number of runs that can be executed using the existing framework
### Details
This particular script would be designed with a specific Jenkins CI system in mind, however, similar approach can be adopted to any other CI systems around.",1.0,"add a keycloak benchmark ci runner shell script to run simulations on a remote ci agent - ### Description
add a keycloak benchmark ci runner shell script to run simulations on a remote ci agent
### Discussion
_No response_
### Motivation
To be able to scale the number of runs that can be executed using the existing framework
### Details
This particular script would be designed with a specific Jenkins CI system in mind, however, similar approach can be adopted to any other CI systems around.",1,add a keycloak benchmark ci runner shell script to run simulations on a remote ci agent description add a keycloak benchmark ci runner shell script to run simulations on a remote ci agent discussion no response motivation to be able to scale the number of runs that can be executed using the existing framework details this particular script would be designed with a specific jenkins ci system in mind however similar approach can be adopted to any other ci systems around ,1
131641,5163549418.0,IssuesEvent,2017-01-17 07:17:32,VirtoCommerce/vc-platform,https://api.github.com/repos/VirtoCommerce/vc-platform,closed,Storefront fix bug when Contact.Id and User.Id can be different,bug high priority,"Need to use User.MemberId to link security account with CRM contact
",1.0,"Storefront fix bug when Contact.Id and User.Id can be different - Need use User.MemberId to link security account with CRM contact
",0,storefront fix bug when contact id and user id can be different need use user memberid to link security account with crm contact ,0
3766,14531472321.0,IssuesEvent,2020-12-14 20:50:41,BCDevOps/OpenShift4-RollOut,https://api.github.com/repos/BCDevOps/OpenShift4-RollOut,opened,OCP GOLD - Configure the new GOLD cluster,team/DXC tech/automation,"**Describe the issue**
After bootstrapping all nodes is complete, final configuration will need to be applied before the cluster is in a working state.
**Which Sprint Goal is this issue related to?**
**Additional context**
Involved playbook is found here:
**Definition of done Checklist (where applicable)**
- [ ] Run playbooks/config-api.yaml - confirm web console and oc still work.
- [ ] Run playbooks/config-everything.yaml. Confirm overall cluster functionality.",1.0,"OCP GOLD - Configure the new GOLD cluster - **Describe the issue**
After bootstrapping all nodes is complete, final configuration will need to be applied before the cluster is in a working state.
**Which Sprint Goal is this issue related to?**
**Additional context**
Involved playbook is found here:
**Definition of done Checklist (where applicable)**
- [ ] Run playbooks/config-api.yaml - confirm web console and oc still work.
- [ ] Run playbooks/config-everything.yaml. Confirm overall cluster functionality.",1,ocp gold configure the new gold cluster describe the issue after bootstrapping all nodes is complete final configuration will need to be applied before the cluster is in a working state which sprint goal is this issue related to additional context involved playbook is found here definition of done checklist where applicable run playbooks config api yaml confirm web console and oc still work run playbooks config everything yaml confirm overall cluster functionality ,1
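The two checklist items above boil down to running the named playbooks in order and then confirming cluster health. A rough wrapper could look like the sketch below; the inventory path is a placeholder and the confirmation steps remain manual, since the issue does not automate them.
```python
# Hedged sketch: run the two configuration playbooks named in the checklist,
# stopping on the first failure. The inventory location is a placeholder.
import subprocess
import sys

PLAYBOOKS = ["playbooks/config-api.yaml", "playbooks/config-everything.yaml"]

def run_playbook(playbook, inventory="inventory/hosts"):
    cmd = ["ansible-playbook", "-i", inventory, playbook]
    print("Running:", " ".join(cmd))
    return subprocess.run(cmd).returncode

if __name__ == "__main__":
    for pb in PLAYBOOKS:
        if run_playbook(pb) != 0:
            sys.exit(f"{pb} failed; fix and re-run before continuing")
    print("Playbooks finished; confirm web console, oc access and overall cluster health manually.")
```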
4689,17243962693.0,IssuesEvent,2021-07-21 05:28:52,pc2ccs/pc2v9,https://api.github.com/repos/pc2ccs/pc2v9,opened,Update team numbers in pc2 from a CLICS event feed,automation enhancement,"**Is your feature request related to a problem?**
More of an inefficiency
**Feature Description**:
Update team numbers based on team numbers in a CLICS event feed.
The CMS Team Id (aka external team id) will be used to match
the team account, then the team number can be updated/assigned.
**Have you considered other ways to accomplish the same thing?**
Yes, there is no workaround.
**Do you have any specific suggestions for how your feature would be ***implemented*** in PC^2?** If so,
Create a command line tool that will update an existing teams.tsv
and update the team numbers.
Another option is to add a UI feature that will read the event feed and
update the team numbers.
**Additional context**:
Note that connection information can be read from CDP/config files or
from the ContestInformation class.",1.0,"Update team numbers in pc2 from a CLICS event feed - **Is your feature request related to a problem?**
More of an inefficiency
**Feature Description**:
Update team numbers based on team numbers in a CLICS event feed.
The CMS Team Id (aka external team id) will be used to match
the team account, then the team number can be updated/assigned.
**Have you considered other ways to accomplish the same thing?**
Yes, there is no workaround.
**Do you have any specific suggestions for how your feature would be ***implemented*** in PC^2?** If so,
Create a command line tool that will update an existing teams.tsv
and update the team numbers.
Another option is to add a UI feature that will read the event feed and
update the team numbers.
**Additional context**:
Note that connection information can be read from CDP/config files or
from the ContestInformation class.",1,update team numbers in from a clics event feed is your feature request related to a problem more of an inefficiency feature description update team numbers based on team numbers in a clics event feed the cms team id aka external team id will be used to match the team account then the team number can be updated assignged have you considered other ways to accomplish the same thing yes there is no workaround do you have any specific suggestions for how your feature would be implemented in pc if so create a command line tool that will update an existing teams tsv and update the team numbers another option is to add a ui feature that will read the event feed and update the team numbers additional context note that connection information can be read from cdp config files or from the contestinformation class ,1
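To make the matching step concrete, the sketch below reads team records from a CLICS-style event feed saved as newline-delimited JSON and builds a map from the external (CMS) team id to a team number. The field names (`type`, `data`, `id`, `label`) are assumptions about the feed layout; this is not pc2 code.
```python
# Hedged sketch: derive {external/CMS team id -> team number} from a CLICS
# event feed stored as newline-delimited JSON. Field names are assumptions.
import json

def team_numbers_from_feed(path):
    mapping = {}
    with open(path, encoding="utf-8") as feed:
        for line in feed:
            line = line.strip()
            if not line:
                continue
            event = json.loads(line)
            if event.get("type") != "teams" or not event.get("data"):
                continue
            team = event["data"]
            # "id" is assumed to carry the external/CMS id, "label" the team number
            mapping[team["id"]] = team.get("label", team["id"])
    return mapping

if __name__ == "__main__":
    for cms_id, number in team_numbers_from_feed("event-feed.ndjson").items():
        print(f"CMS id {cms_id} -> team number {number}")
```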
9261,27818868531.0,IssuesEvent,2023-03-19 01:13:59,uceumice/remix-kawaii,https://api.github.com/repos/uceumice/remix-kawaii,opened,"Issue Title
[actions] Automate publishing to **npm** registry...",actions automation registry,"
Right now every push to the registry requires manual hassle on my side of things. It would be really nice if the process of publishing and versioning were release-based.
For example, on new release tag of form: `v*` a `publish.yaml` workflow would build all packages and publish them with a version of `v[tag]` to the npm registry.
Dunno how big it is of a deal to implement, gonna look up on `remix-utils` + a monorepo organized approach for inspirations.",1.0,"Issue Title
[actions] Automate publishing to **npm** registry... -
Right now every push to the registry requires manual hassle on my side of things. It would be really nice if the process of publishing and versioning were release-based.
For example, on new release tag of form: `v*` a `publish.yaml` workflow would build all packages and publish them with a version of `v[tag]` to the npm registry.
Dunno how big it is of a deal to implement, gonna look up on `remix-utils` + a monorepo organized approach for inspirations.",1,issue title automate publishing to npm registry right now every push to the registry requires manual hastle from my side of things it would be really nice if the process of publishing and versioning would be release based for example on new release tag of form v a publish yaml workflow would build all packages and publish them with a version of v to the npm registry dunno how big it is of a deal to implement gonna look up on remix utils a monorepo organized approach for inspirations ,1
5432,19590473978.0,IssuesEvent,2022-01-05 12:23:03,Shopify/toxiproxy,https://api.github.com/repos/Shopify/toxiproxy,closed,Suggestion: please consider signing release tags,ideas automation,"Thank you for your work on Toxiproxy! In addition to signing commits, it would be very helpful if you would consider [signing release tags](https://docs.github.com/en/authentication/managing-commit-signature-verification/signing-tags), to facilitate easy commandline verification of releases with `git tag -v`.
Currently:
```
$ git verify-commit c6c22ff9f2d40dd1c9db2ca7c4e7ba5162a42743
gpg: Signature made Sun Oct 17 12:22:39 2021 EDT
gpg: using DSA key 93189009CE638E5BBFAF0DC0ACD0D4390D132705
gpg: Good signature from ""Michael Nikitochkin (miry) "" [unknown]
gpg: WARNING: This key is not certified with a trusted signature!
gpg: There is no indication that the signature belongs to the owner.
Primary key fingerprint: 9318 9009 CE63 8E5B BFAF 0DC0 ACD0 D439 0D13 2705
$ git tag -v v2.2.0
error: v2.2.0: cannot verify a non-tag object of type commit.
```
This change would be nice for automated workflows, so that it's easy to grab the latest release tag from the Github API and verify the signature on the tag object before installing.",1.0,"Suggestion: please consider signing release tags - Thank you for your work on Toxiproxy! In addition to signing commits, it would be very helpful if you would consider [signing release tags](https://docs.github.com/en/authentication/managing-commit-signature-verification/signing-tags), to facilitate easy commandline verification of releases with `git tag -v`.
Currently:
```
$ git verify-commit c6c22ff9f2d40dd1c9db2ca7c4e7ba5162a42743
gpg: Signature made Sun Oct 17 12:22:39 2021 EDT
gpg: using DSA key 93189009CE638E5BBFAF0DC0ACD0D4390D132705
gpg: Good signature from ""Michael Nikitochkin (miry) "" [unknown]
gpg: WARNING: This key is not certified with a trusted signature!
gpg: There is no indication that the signature belongs to the owner.
Primary key fingerprint: 9318 9009 CE63 8E5B BFAF 0DC0 ACD0 D439 0D13 2705
$ git tag -v v2.2.0
error: v2.2.0: cannot verify a non-tag object of type commit.
```
This change would be nice for automated workflows, so that it's easy to grab the latest release tag from the Github API and verify the signature on the tag object before installing.",1,suggestion please consider signing release tags thank you for your work on toxiproxy in addition to signing commits it would be very helpful if you would consider to facilitate easy commandline verification of releases with git tag v currently git verify commit gpg signature made sun oct edt gpg using dsa key gpg good signature from michael nikitochkin miry gpg warning this key is not certified with a trusted signature gpg there is no indication that the signature belongs to the owner primary key fingerprint bfaf git tag v error cannot verify a non tag object of type commit this change would be nice for automated workflows so that it s easy to grab the latest release tag from the github api and verify the signature on the tag object before installing ,1
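The automated workflow mentioned at the end of the report could verify a release tag roughly as follows: fetch tags, pick the newest `v*` tag, and run `git tag -v` on it before installing. This is a generic sketch that assumes `git` is available and the maintainers' public key is already imported into gpg; it is not Toxiproxy tooling.
```python
# Hedged sketch: verify the newest v* tag's signature before using it.
# Requires git on PATH and the maintainers' public key imported in gpg.
import subprocess

def latest_v_tag(repo_dir="."):
    subprocess.run(["git", "-C", repo_dir, "fetch", "--tags"], check=True)
    tags = subprocess.run(
        ["git", "-C", repo_dir, "tag", "--list", "v*", "--sort=-version:refname"],
        check=True, capture_output=True, text=True,
    ).stdout.splitlines()
    if not tags:
        raise RuntimeError("no v* tags found")
    return tags[0]

def verify_tag(tag, repo_dir="."):
    # Raises CalledProcessError if the tag object is unsigned or the
    # signature does not verify, which is exactly the failure shown above.
    subprocess.run(["git", "-C", repo_dir, "tag", "-v", tag], check=True)

if __name__ == "__main__":
    tag = latest_v_tag()
    verify_tag(tag)
    print(f"{tag}: signature verified")
```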
3255,13249939537.0,IssuesEvent,2020-08-19 21:47:17,ThinkingEngine-net/PickleTestSuite,https://api.github.com/repos/ThinkingEngine-net/PickleTestSuite,closed,Behat states a feature file may include tags at scenario level,Browser Automation bug,"
Behat states a feature file may include tags at scenario level and at feature level. When you run it, it only picks up tags at scenario level. In the following example, it picks up the @PL-234 tag, but not the @smoke, @setup, or @javascript tags set at the feature level:
**@smoke @setup @javascript
Feature: 00 Smoke test site setup
Background: Given ""admin"" login with profile settings
@PL-234
Scenario: PL-234 Check you are on the correct version of Totara When I go direct to ""/admin/index.php""
Then I am on the expected version number**
",1.0,"Behat states a feature file may include tags at scenario level -
Behat states a feature file may include tags at scenario level and at feature level. When you run it, it only picks up tags at scenario level. In the following example, it picks up the @PL-234 tag, but not the @smoke, @setup, or @javascript tags set at the feature level:
**@smoke @setup @javascript
Feature: 00 Smoke test site setup
Background: Given ""admin"" login with profile settings
@PL-234
Scenario: PL-234 Check you are on the correct version of Totara When I go direct to ""/admin/index.php""
Then I am on the expected version number**
",1,behat states a feature file may include tags at scenario level behat states a feature file may include tags at scenario level and at feature level when you run execute it only picks up tags at scenario level in the following example it picks up the pl tag but not the smoke setup or javascript tag set at the feature level smoke setup javascript feature smoke test site setup background given admin login with profile settings pl scenario pl check you are on the correct version of totara when i go direct to admin index php then i am on the expected version number ,1
4393,16455857638.0,IssuesEvent,2021-05-21 12:28:51,mozilla-mobile/focus-ios,https://api.github.com/repos/mozilla-mobile/focus-ios,closed,Add a CODEOWNERS to require reviews of to bitrise.yml changes,eng:automation,"Let's create a `.github/CODEOWNERS` file to require that any change to `bitrise.yml` must be approved by a specific group of people. I would like to suggest that that group for now is just @isabelrios and @st3fan
This goes together with the _Require review from Code Owners_ option that can be found under the _Branch Protection_ settings. (Currently enabled)
This adds a good safeguard and makes sure no accidental changes to our CI configuration happen.
https://docs.github.com/en/github/creating-cloning-and-archiving-repositories/about-code-owners
",1.0,"Add a CODEOWNERS to require reviews of to bitrise.yml changes - Let's create a `.github/CODEOWNERS` file to require that any change to `bitrise.yml` must be approved by a specific group of people. I would like to suggest that that group for now is just @isabelrios and @st3fan
This goes together with the _Require review from Code Owners_ option that can be found under the _Branch Protection_ settings. (Currently enabled)
This adds a good safeguard and makes sure no accidental changes to our CI configuration happen.
https://docs.github.com/en/github/creating-cloning-and-archiving-repositories/about-code-owners
",1,add a codeowners to require reviews of to bitrise yml changes let s create a github codeowners file to require that any change to bitrise yml must be approved by a specific group of people i would like to suggest that that group for now is just isabelrios and this goes together with the require review from code owners option that can be found under the branch protection settings currently enabled this adds a good safeguard and makes sure no accidental changes to our ci configuration happen ,1
2010,11259394928.0,IssuesEvent,2020-01-13 08:13:06,wazuh/wazuh-qa,https://api.github.com/repos/wazuh/wazuh-qa,closed,FIM v2.0: Improve parameter generator function to be applied to every test,automation component/fim,"## Description
For most of our test, we generate their configuration parameters through a function called `generate_params` (fim.py). We need to improve its functionality to be able to use it for every test and cover every possible case.
At least, we should cover these:
- [x] To be able to assign a different value to a same attribute and wildcard in a list for different modes.
- [x] To be able to append new parameters to every existing configuration.
- [x] To be able to generate as many configurations as wanted.",1.0,"FIM v2.0: Improve parameter generator function to be applied to every test - ## Description
For most of our test, we generate their configuration parameters through a function called `generate_params` (fim.py). We need to improve its functionality to be able to use it for every test and cover every possible case.
At least, we should cover these:
- [x] To be able to assign a different value to a same attribute and wildcard in a list for different modes.
- [x] To be able to append new parameters to every existing configuration.
- [x] To be able to generate as many configurations as wanted.",1,fim improve parameter generator function to be applied to every test description for most of our test we generate their configuration parameters through a function called generate params fim py we need to improve its functionality to be able to use it for every test and cover every possible case at least we should cover these to be able to assign a different value to a same attribute and wildcard in a list for different modes to be able to append new parameters to every existing configuration to be able to generate as many configurations as wanted ,1
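A generator along the lines of the checklist above could look like the following sketch: per-mode values for the same attribute wildcard, extra parameters merged into every configuration, and an arbitrary number of generated configurations. The names are illustrative; this is not the actual `generate_params` from `fim.py`.
```python
# Hedged sketch of a configuration-parameter generator: per-mode values for the
# same wildcard, extra parameters merged into every configuration, and an
# arbitrary number of generated configurations. Names are illustrative only.
import itertools

def generate_params(apply_to_all=None, modes=("scheduled", "realtime", "whodata"),
                    mode_values=None, n_configs=1):
    """Yield (parameters, metadata) pairs, one per mode per configuration."""
    apply_to_all = apply_to_all or {}
    mode_values = mode_values or {}
    for _, mode in itertools.product(range(n_configs), modes):
        params = {"FIM_MODE": mode}
        params.update(mode_values.get(mode, {}))   # same wildcard, different value per mode
        params.update(apply_to_all)                # appended to every configuration
        yield params, {"fim_mode": mode}

if __name__ == "__main__":
    per_mode = {"realtime": {"ATTRIBUTE": "yes"}, "whodata": {"ATTRIBUTE": "no"}}
    for params, meta in generate_params(apply_to_all={"REPORT_CHANGES": "yes"},
                                        mode_values=per_mode, n_configs=2):
        print(meta, params)
```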
1836,10916537395.0,IssuesEvent,2019-11-21 13:33:47,zalando-incubator/kopf,https://api.github.com/repos/zalando-incubator/kopf,opened,Revise the e2e testing for pristine-clean environments,automation,"The 0.23 release got a few unfortunate bugs (#251, #249) that sneaked into master and the releases: cluster-scoped resources in namespaced operators, and peering auto-detection.
Despite 0.23 contained a lot of massive refactorings, and the failures were expected to happen to some extent, they were not noticed neither by the unit-tests, nor by the e2e test, nor by our internal usage of the operators with 0.23rcX versions.
This, in turn, was caused by the tests setup: either the cases were absent (like the cluster-scoped resources in namespaced operators), or the environment was half-configured enough to cause false-positives (`zalando.org/v1` group existed due to `KopfExample` in the tests, thus passing the e2e tests for `KopfPeering` resources, but not in the clusters of the 1st-time users, where it crashed with 404).
This highlights that the tests have become insufficient and the testing approach has to be revised to ensure better stability. A new setup should cover all the cases, and especially the quick-start scenario in a pristine clean environment without any assumptions.
Related: #13 ",1.0,"Revise the e2e testing for pristine-clean environments - The 0.23 release got a few unfortunate bugs (#251, #249) that sneaked into master and the releases: cluster-scoped resources in namespaced operators, and peering auto-detection.
Despite 0.23 contained a lot of massive refactorings, and the failures were expected to happen to some extent, they were not noticed neither by the unit-tests, nor by the e2e test, nor by our internal usage of the operators with 0.23rcX versions.
This, in turn, was caused by the tests setup: either the cases were absent (like the cluster-scoped resources in namespaced operators), or the environment was half-configured enough to cause false-positives (`zalando.org/v1` group existed due to `KopfExample` in the tests, thus passing the e2e tests for `KopfPeering` resources, but not in the clusters of the 1st-time users, where it crashed with 404).
This highlights that the tests have become insufficient and the testing approach has to be revised to ensure better stability. A new setup should cover all the cases, and especially the quick-start scenario in a pristine clean environment without any assumptions.
Related: #13 ",1,revise the testing for pristine clean environments release got few unfortunate bugs sneaked into master and the releases for cluster scoped resources in a namespaces operators and for peering auto detection despite contained a lot of massive refactorings and the failures were expected to happen to some extent they were not noticed neither by the unit tests nor by the test nor by our internal usage of the operators with versions this in turn was caused by the tests setup either the cases were absent like the cluster scoped resources in namespaced operators or the environment was half configured enough to cause false positives zalando org group existed due to kopfexample in the tests thus passing the tests for kopfpeering resources but not in the clusters of the time users where it crashed with this highlights that the tests have become insufficient and the testing approach has to be revised to ensure better stability a new setup should cover all the cases and especially the quick start scenario in a pristine clean environment without any assumptions related ,1
13080,3105749951.0,IssuesEvent,2015-08-31 22:39:36,rackerlabs/encore-ui,https://api.github.com/repos/rackerlabs/encore-ui,closed,Add feedback to Collapsible Table Filter pattern in styleguide,design documentation effort:medium priority:soon,"In relation to #915, the Collapsible Table Filter pattern needs some sort of feedback or notice that a data set is filtered when collapsed.
*Table is filtered by ORD, but there's no indication of applied filters when collapsed.*

",1.0,"Add feedback to Collapsible Table Filter pattern in styleguide - In relation to #915, the Collapsible Table Filter pattern needs some sort of feedback or notice that a data set is filtered when collapsed.
*Table is filtered by ORD, but there's no indication of applied filters when collapsed.*

",0,add feedback to collapsible table filter pattern in styleguide in relation to the collapsible table filter pattern needs some sort of feedback or notice that a data set is filtered when collapsed table is filtered by ord but there s no indication of applied filters when collapsed ,0
275463,8576026245.0,IssuesEvent,2018-11-12 19:04:31,mozilla/addons-server,https://api.github.com/repos/mozilla/addons-server,closed,Addons detail contains empty author list,component: api priority: p3 triaged type: papercut,"See https://addons.mozilla.org/api/v3/addons/addon/eyes-in-the-clouds/
Would be good to understand is this is an expected state or if there's an underlying issue to allow the author list to end up empty.
See also https://github.com/mozilla/addons-frontend/issues/2073",1.0,"Addons detail contains empty author list - See https://addons.mozilla.org/api/v3/addons/addon/eyes-in-the-clouds/
Would be good to understand is this is an expected state or if there's an underlying issue to allow the author list to end up empty.
See also https://github.com/mozilla/addons-frontend/issues/2073",0,addons detail contains empty author list see would be good to understand is this is an expected state or if there s an underlying issue to allow the author list to end up empty see also ,0
77928,15569904819.0,IssuesEvent,2021-03-17 01:15:55,Killy85/MachineLearningExercises,https://api.github.com/repos/Killy85/MachineLearningExercises,opened,"CVE-2021-27923 (High) detected in Pillow-5.4.1-cp27-cp27mu-manylinux1_x86_64.whl, Pillow-6.0.0-cp27-cp27mu-manylinux1_x86_64.whl",security vulnerability,"## CVE-2021-27923 - High Severity Vulnerability
Vulnerable Libraries - Pillow-5.4.1-cp27-cp27mu-manylinux1_x86_64.whl, Pillow-6.0.0-cp27-cp27mu-manylinux1_x86_64.whl
Pillow before 8.1.1 allows attackers to cause a denial of service (memory consumption) because the reported size of a contained image is not properly checked for an ICO container, and thus an attempted memory allocation can be very large.
For more information on CVSS3 Scores, click here.
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2021-27923 (High) detected in Pillow-5.4.1-cp27-cp27mu-manylinux1_x86_64.whl, Pillow-6.0.0-cp27-cp27mu-manylinux1_x86_64.whl - ## CVE-2021-27923 - High Severity Vulnerability
Vulnerable Libraries - Pillow-5.4.1-cp27-cp27mu-manylinux1_x86_64.whl, Pillow-6.0.0-cp27-cp27mu-manylinux1_x86_64.whl
Pillow before 8.1.1 allows attackers to cause a denial of service (memory consumption) because the reported size of a contained image is not properly checked for an ICO container, and thus an attempted memory allocation can be very large.
For more information on CVSS3 Scores, click here.
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve high detected in pillow whl pillow whl cve high severity vulnerability vulnerable libraries pillow whl pillow whl pillow whl python imaging library fork library home page a href dependency hierarchy x pillow whl vulnerable library pillow whl python imaging library fork library home page a href dependency hierarchy x pillow whl vulnerable library vulnerability details pillow before allows attackers to cause a denial of service memory consumption because the reported size of a contained image is not properly checked for an ico container and thus an attempted memory allocation can be very large publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href step up your open source security game with whitesource ,0
3615,14146824825.0,IssuesEvent,2020-11-10 19:49:30,domoticafacilconjota/capitulos,https://api.github.com/repos/domoticafacilconjota/capitulos,opened,[AtoNodeRED] Message to the APP or to Telegram that Home Assistant has started,Automation a Node RED,"**Automation code**
```
- id: '1604933637567'
alias: HA iniciado
description: ''
trigger:
- platform: homeassistant
event: start
condition: []
action:
- service: notify.mobile_app_redmi_note_7
data:
title: HA init
message: Home Assistant ha comenzado una nueva sesión a las {{ as_timestamp(now())|
timestamp_local}}.
mode: single
```
**Explanation of what the automation currently does**
This automation sends a notification to the HA app with the date and time when HA started, and it works perfectly.
**Author's notes**
In Node-RED I have created a flow with the ""event:all"" node and set the ""event type"" to ""homeassistant_start"", but it gives no output when HA starts.",1.0,"[AtoNodeRED] Message to the APP or to Telegram that Home Assistant has started - **Automation code**
```
- id: '1604933637567'
alias: HA iniciado
description: ''
trigger:
- platform: homeassistant
event: start
condition: []
action:
- service: notify.mobile_app_redmi_note_7
data:
title: HA init
message: Home Assistant ha comenzado una nueva sesión a las {{ as_timestamp(now())|
timestamp_local}}.
mode: single
```
**Explanation of what the automation currently does**
This automation sends a notification to the HA app with the date and time when HA started, and it works perfectly.
**Author's notes**
In Node-RED I have created a flow with the ""event:all"" node and set the ""event type"" to ""homeassistant_start"", but it gives no output when HA starts.",1, message to the app or to telegram that home assistant has started automation code id alias ha iniciado description trigger platform homeassistant event start condition action service notify mobile app redmi note data title ha init message home assistant ha comenzado una nueva sesión a las as timestamp now timestamp local mode single explanation of what the automation currently does this automation sends a notification to the ha app with the date and time when ha started and it works perfectly author notes in node red i have created a flow with the event all node and set the event type to homeassistant start but it gives no output when ha starts ,1
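Outside of Node-RED, one way to see which events the instance actually emits is to subscribe to Home Assistant's websocket event bus directly; the sketch below prints the type of every event it receives, which can help confirm whether `homeassistant_start` ever reaches external listeners (the connection itself drops across a restart, so a listener has to reconnect). Host, port, and token are placeholders, and the third-party `websockets` package is assumed to be installed.
```python
# Illustrative only: authenticate against the Home Assistant websocket API and
# print the type of every event received. Host and access token are placeholders.
import asyncio
import json

import websockets  # third-party package, assumed installed

HA_URL = "ws://homeassistant.local:8123/api/websocket"  # placeholder host
TOKEN = "LONG_LIVED_ACCESS_TOKEN"                       # placeholder token

async def watch_events():
    async with websockets.connect(HA_URL) as ws:
        await ws.recv()  # server sends auth_required first
        await ws.send(json.dumps({"type": "auth", "access_token": TOKEN}))
        await ws.recv()  # expect auth_ok
        await ws.send(json.dumps({"id": 1, "type": "subscribe_events"}))
        async for raw in ws:
            message = json.loads(raw)
            if message.get("type") == "event":
                print(message["event"]["event_type"])

if __name__ == "__main__":
    asyncio.run(watch_events())
```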
485254,13963262854.0,IssuesEvent,2020-10-25 13:25:50,MSFS-Mega-Pack/MSFS2020-livery-manager,https://api.github.com/repos/MSFS-Mega-Pack/MSFS2020-livery-manager,closed,"[BUG] No scrollbar in ""Available Liveries""",area: livery installation bug priority: MEDIUM type: ui type: ux,"## Description
The tab Available Liveries does not show a scrollbar to scroll through the menu without using the scrollwheel on a mouse.
## To reproduce
1. Click on Available Liveries
## Environment
**Manager version:** 0.0.2
## Screenshots or videos
Click to expand

",1.0,"[BUG] No scrollbar in ""Available Liveries"" - ## Description
The tab Available Liveries does not show a scrollbar to scroll through the menu without using the scrollwheel on a mouse.
## To reproduce
1. Click on Available Liveries
## Environment
**Manager version:** 0.0.2
## Screenshots or videos
Click to expand

",0, no scrollbar in available liveries description the tab available liveries does not show a scrollbar to scroll through the menu without using the scrollwheel on a mouse to reproduce click on available liveries environment manager version screenshots or videos click to expand ,0
74734,25285500539.0,IssuesEvent,2022-11-16 18:56:38,primefaces/primefaces,https://api.github.com/repos/primefaces/primefaces,opened,TriStateCheckbox: Item label not aligned with checkbox,:lady_beetle: defect :bangbang: needs-triage,"### Describe the bug
It was correct on Primefaces 8 (tested on older version of primefaces-test), but it isn't from Primefaces 10 to 12 (latest primefaces-test). It's just 1 line to reproduce, so I didn't upload the reproducer, but I can if needed.
Code:
``
How it's displayed:

What`s expected:

### Reproducer
Code:
``
### Expected behavior

### PrimeFaces edition
Community
### PrimeFaces version
10-12
### Theme
Any
### JSF implementation
Mojarra
### JSF version
2.2.x
### Java version
11
### Browser(s)
Any",1.0,"TriStateCheckbox: Item label not aligned with checkbox - ### Describe the bug
It was correct on Primefaces 8 (tested on older version of primefaces-test), but it isn't from Primefaces 10 to 12 (latest primefaces-test). It's just 1 line to reproduce, so I didn't upload the reproducer, but I can if needed.
Code:
``
How it's displayed:

What`s expected:

### Reproducer
Code:
``
### Expected behavior

### PrimeFaces edition
Community
### PrimeFaces version
10-12
### Theme
Any
### JSF implementation
Mojarra
### JSF version
2.2.x
### Java version
11
### Browser(s)
Any",0,tristatecheckbox item label not aligned with checkbox describe the bug it was correct on primefaces tested on older version of primefaces test but it isn t from primefaces to latest primefaces test it s just line to reproduce so i didn t upload the reproducer but i can if needed code how it s displayed what s expected reproducer code expected behavior primefaces edition community primefaces version theme any jsf implementation mojarra jsf version x java version browser s any,0
26239,26579960216.0,IssuesEvent,2023-01-22 10:20:02,godotengine/godot,https://api.github.com/repos/godotengine/godot,closed,Can't edit TileMap or TileSet after opening its shader,bug topic:editor confirmed usability topic:2d,"**Godot version:**
3.1
**OS/device including version:**
Windows 10 Pro 64-bit
**Issue description:**
Once you open a Shader on a TileMap, you can no longer modify the tiles. This happens because the Shader panel is shown instead of the TileMap panel. This persists even if you close and reopen Godot. It took me a while to figure out how to resume editing the TileMap, either by:
A. Remove the Shader from the TileMap; or
B. Collapse the Shader in the Inspector the navigate away and then back to editing the TileMap.
**Steps to reproduce:**
1. Create a TileMap
2. Add a ShaderMaterial to the TileMap
3. Add a Shader to the ShaderMaterial
4. Open the Shader
**Minimal reproduction project:**
[TileMapShaderBug.zip](https://github.com/godotengine/godot/files/3582276/TileMapShaderBug.zip)
",True,"Can't edit TileMap or TileSet after opening its shader - **Godot version:**
3.1
**OS/device including version:**
Windows 10 Pro 64-bit
**Issue description:**
Once you open a Shader on a TileMap, you can no longer modify the tiles. This happens because the Shader panel is shown instead of the TileMap panel. This persists even if you close and reopen Godot. It took me a while to figure out how to resume editing the TileMap, either by:
A. Remove the Shader from the TileMap; or
B. Collapse the Shader in the Inspector the navigate away and then back to editing the TileMap.
**Steps to reproduce:**
1. Create a TileMap
2. Add a ShaderMaterial to the TileMap
3. Add a Shader to the ShaderMaterial
4. Open the Shader
**Minimal reproduction project:**
[TileMapShaderBug.zip](https://github.com/godotengine/godot/files/3582276/TileMapShaderBug.zip)
",0,can t edit tilemap or tileset after opening its shader godot version os device including version windows pro bit issue description once you open a shader on a tilemap you can no longer modify the tiles this happens because the shader panel is shown instead of the tilemap panel this persists even if you close and reopen godot it took me a while to figure out how to resume editing the tilemap either by a remove the shader from the tilemap or b collapse the shader in the inspector the navigate away and then back to editing the tilemap steps to reproduce create a tilemap add a shadermaterial to the tilemap add a shader to the shadermaterial open the shader minimal reproduction project ,0
21171,3466938659.0,IssuesEvent,2015-12-22 08:26:42,Ryzhehvost/keyla,https://api.github.com/repos/Ryzhehvost/keyla,closed,Tray icon not correct,auto-migrated Can't reproduce duplicate Priority-Medium Type-Defect,"```
What steps will reproduce the problem?
1. Open Windows Explorer or rename file on the desktop or use Search option in
the Start menu
2.
3.
What is the expected output? What do you see instead?
The layout is changed when pressing (Ctrl+Shift on my PC) but the flag is not
correct in the keyla tray icon. It does not catch the change. Even pressing
shortcut defined in keyla does not change the icon (but the key layout is
changed)
What version of the product are you using? On what operating system?
0.1.9.0. On Win7 64
Please provide any additional information below.
Would be nice to be able to switch to the next keyboard by clicking once on the
icon in the tray
```
Original issue reported on code.google.com by `okt...@gmail.com` on 21 Apr 2013 at 4:49",1.0,"Tray icon not correct - ```
What steps will reproduce the problem?
1. Open Windows Explorer or rename file on the desktop or use Search option in
the Start menu
2.
3.
What is the expected output? What do you see instead?
The layout is changed when pressing (Ctrl+Shift on my PC) but the flag is not
correct in the keyla tray icon. It does not catch the change. Even pressing
shortcut defined in keyla does not change the icon (but the key layout is
changed)
What version of the product are you using? On what operating system?
0.1.9.0. On Win7 64
Please provide any additional information below.
Would be nice to be able to switch to the next keyboard by clicking once on the
icon in the tray
```
Original issue reported on code.google.com by `okt...@gmail.com` on 21 Apr 2013 at 4:49",0,tray icon not correct what steps will reproduce the problem open windows explorer or rename file on the desktop or use search option in the start menu what is the expected output what do you see instead the layout is changed when pressing ctrl shift on my pc but the flag is not correct in the keyla tray icon it does not catch the change even pressing shortcut defined in keyla does not change the icon but the key layout is changed what version of the product are you using on what operating system on please provide any additional information below would be nice to be able to switch to the next keyboard by clicking once on the icon in the tray original issue reported on code google com by okt gmail com on apr at ,0
35144,6417293855.0,IssuesEvent,2017-08-08 16:26:02,F5Networks/f5-openstack-lbaasv1,https://api.github.com/repos/F5Networks/f5-openstack-lbaasv1,closed,Version numbers don't match in installation instructions (index.rst),bug documentation P4 S5,"#### OpenStack Release
Liberty
#### Description
On http://f5-openstack-lbaasv1.readthedocs.io/en/liberty/: the version numbers shown in the example under quick start don't correspond to the release that provides support for liberty.
",1.0,"Version numbers don't match in installation instructions (index.rst) - #### OpenStack Release
Liberty
#### Description
On http://f5-openstack-lbaasv1.readthedocs.io/en/liberty/: the version numbers shown in the example under quick start don't correspond to the release that provides support for liberty.
",0,version numbers don t match in installation instructions index rst openstack release liberty description on the version numbers shown in the example under quick start don t correspond to the release that provides support for liberty ,0
6750,6584198368.0,IssuesEvent,2017-09-13 09:18:43,dweitz43/nhl,https://api.github.com/repos/dweitz43/nhl,opened,Consume MySportsFeeds Data into Server-side Database,backend db infrastructure js needs research,the data from the third-party datasource must be consumed by the server-side database #2. This requires more research to figure out the most elegant/efficient solution...,1.0,Consume MySportsFeeds Data into Server-side Database - the data from the third-party datasource must be consumed by the server-side database #2. This requires more research to figure out the most elegant/efficient solution...,0,consume mysportsfeeds data into server side database the data from the third party datasource must be consumed by the server side database this requires more research to figure out the most elegant efficient solution ,0
200909,22916021712.0,IssuesEvent,2022-07-17 01:10:47,ShaikUsaf/linux-4.19.72_CVE-2020-14356,https://api.github.com/repos/ShaikUsaf/linux-4.19.72_CVE-2020-14356,opened,CVE-2022-2380 (Medium) detected in linuxlinux-4.19.238,security vulnerability,"## CVE-2022-2380 - Medium Severity Vulnerability
Vulnerable Library - linuxlinux-4.19.238
The Linux kernel was found vulnerable to out-of-bounds memory access in the drivers/video/fbdev/sm712fb.c:smtcfb_read() function. The vulnerability could allow local attackers to crash the kernel.
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2022-2380 (Medium) detected in linuxlinux-4.19.238 - ## CVE-2022-2380 - Medium Severity Vulnerability
Vulnerable Library - linuxlinux-4.19.238
The Linux kernel was found vulnerable to out-of-bounds memory access in the drivers/video/fbdev/sm712fb.c:smtcfb_read() function. The vulnerability could allow local attackers to crash the kernel.
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve medium detected in linuxlinux cve medium severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in head commit a href found in base branch master vulnerable source files drivers video fbdev c drivers video fbdev c vulnerability details the linux kernel was found vulnerable out of bounds memory access in the drivers video fbdev c smtcfb read function the vulnerability could result in local attackers being able to crash the kernel publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact low integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend ,0
9385,28139010587.0,IssuesEvent,2023-04-01 18:23:01,awslabs/aws-lambda-powertools-typescript,https://api.github.com/repos/awslabs/aws-lambda-powertools-typescript,closed,Maintenance: update Lerna to latest,area/automation type/internal status/confirmed,"### Summary
The project currently uses Lerna version `4.0.0`, which is fairly old and can be updated to the latest version.
### Why is this needed?
This version is over a year old and is behind two major versions and as such is not getting updates of any kind anymore.
### Which area does this relate to?
Automation
### Solution
Update `lerna` to its latest version and update the `make-release` workflow accordingly.
### Acknowledgment
- [X] This request meets [Lambda Powertools Tenets](https://awslabs.github.io/aws-lambda-powertools-typescript/latest/#tenets)
- [ ] Should this be considered in other Lambda Powertools languages? i.e. [Python](https://github.com/awslabs/aws-lambda-powertools-python/), [Java](https://github.com/awslabs/aws-lambda-powertools-java/), and [.NET](https://github.com/awslabs/aws-lambda-powertools-dotnet/)
### Future readers
Please react with 👍 and your use case to help us understand customer demand.",1.0,"Maintenance: update Lerna to latest - ### Summary
The project currently uses Lerna version `4.0.0`, which is fairly old and can be updated to the latest version.
### Why is this needed?
This version is over a year old and is behind two major versions and as such is not getting updates of any kind anymore.
### Which area does this relate to?
Automation
### Solution
Update `lerna` to its latest version and update the `make-release` workflow accordingly.
### Acknowledgment
- [X] This request meets [Lambda Powertools Tenets](https://awslabs.github.io/aws-lambda-powertools-typescript/latest/#tenets)
- [ ] Should this be considered in other Lambda Powertools languages? i.e. [Python](https://github.com/awslabs/aws-lambda-powertools-python/), [Java](https://github.com/awslabs/aws-lambda-powertools-java/), and [.NET](https://github.com/awslabs/aws-lambda-powertools-dotnet/)
### Future readers
Please react with 👍 and your use case to help us understand customer demand.",1,maintenance update lerna to latest summary the project currently uses lerna version which fairly old and can be updated to the latest version why is this needed this version is over a year old and is behind two major versions and as such is not getting updates of any kind anymore which area does this relate to automation solution update lerna to its latest version and update the make release workflow accordingly acknowledgment this request meets should this be considered in other lambda powertools languages i e and future readers please react with 👍 and your use case to help us understand customer demand ,1
11007,4128041026.0,IssuesEvent,2016-06-10 02:54:30,TEAMMATES/teammates,https://api.github.com/repos/TEAMMATES/teammates,closed,Re-organize FileHelper classes,a-CodeQuality m.Aspect,"There are two `FileHelper`s, one for production (reading input stream etc.) and one for non-production (reading files etc.), but they're not very well-organized right now. Also, there are some self-defined functions that can actually fit in either one of these classes.",1.0,"Re-organize FileHelper classes - There are two `FileHelper`s, one for production (reading input stream etc.) and one for non-production (reading files etc.), but they're not very well-organized right now. Also, there are some self-defined functions that can actually fit in either one of these classes.",0,re organize filehelper classes there are two filehelper s one for production reading input stream etc and one for non production reading files etc but they re not very well organized right now also there are some self defined functions that can actually fit in either one of these classes ,0
8846,27172324094.0,IssuesEvent,2023-02-17 20:40:33,OneDrive/onedrive-api-docs,https://api.github.com/repos/OneDrive/onedrive-api-docs,closed,Onedrive Api search request always returns an empty collection,type:bug area:Search status:backlogged automation:Closed,"> Thank you for reporting an issue or suggesting an enhancement. We appreciate your feedback - to help the team to understand your needs, please complete the below template to ensure we have the necessary details to assist you.
>
> _Submission Guidelines:_
> - Questions and bugs are welcome, please let us know what's on your mind.
> - If you are reporting an issue around any of the documents or articles, please provide clear reference(s) to the specific file(s) or URL('s).
> - Remember to include sufficient details and context.
> - If you have multiple issues, please submit them as separate issues so we can track resolution.
>
> _(DELETE THIS PARAGRAPH AFTER READING)_
>
#### Category
- [x] Question
- [ ] Documentation issue
- [ ] Bug
> For the above list, an empty checkbox is [ ]. A checked checkbox is [x] with no space between the brackets. Use the `PREVIEW` tab at the top right to preview the rendering before submitting your issue.
>
> If you are planning to share a new feature request (enhancement / suggestion), please use the OneDrive Developer Platform UserVoice at http://aka.ms/od-dev-uservoice, or the SharePoint Developer Platform UserVoice at http://aka.ms/sp-dev-uservoice.
> If you have a question about Azure Active Directory, outside of issues with the documentation provided in the OneDrive Developer Center, please ask it here: https://stackoverflow.com/questions/tagged/azure-active-directory
>
> _(DELETE THIS PARAGRAPH AFTER READING)_
>
#### Expected or Desired Behavior
> If you are reporting a bug, please describe the expected behavior.
>
> _(DELETE THIS PARAGRAPH AFTER READING)_
>
#### Observed Behavior
> If you are reporting a bug, please describe the observed behavior.
>
> Please also provide the following response headers corresponding to your request(s):
> - Date (in UTC, please)
> - request-id
> - SPRequestGuid (for requests made to OneDrive for Business)
>
> _(DELETE THIS PARAGRAPH AFTER READING)_
>
#### Steps to Reproduce
> If you are reporting a bug, please describe the steps to reproduce the bug in sufficient detail for another person to be able to reproduce it.
>
> _(DELETE THIS PARAGRAPH AFTER READING)_
>
Thank you.
[ ]: http://aka.ms/onedrive-api-issues
[x]: http://aka.ms/onedrive-api-issues",1.0,"Onedrive Api search request always returns an empty collection - > Thank you for reporting an issue or suggesting an enhancement. We appreciate your feedback - to help the team to understand your needs, please complete the below template to ensure we have the necessary details to assist you.
>
> _Submission Guidelines:_
> - Questions and bugs are welcome, please let us know what's on your mind.
> - If you are reporting an issue around any of the documents or articles, please provide clear reference(s) to the specific file(s) or URL('s).
> - Remember to include sufficient details and context.
> - If you have multiple issues, please submit them as separate issues so we can track resolution.
>
> _(DELETE THIS PARAGRAPH AFTER READING)_
>
#### Category
- [x] Question
- [ ] Documentation issue
- [ ] Bug
> For the above list, an empty checkbox is [ ]. A checked checkbox is [x] with no space between the brackets. Use the `PREVIEW` tab at the top right to preview the rendering before submitting your issue.
>
> If you are planning to share a new feature request (enhancement / suggestion), please use the OneDrive Developer Platform UserVoice at http://aka.ms/od-dev-uservoice, or the SharePoint Developer Platform UserVoice at http://aka.ms/sp-dev-uservoice.
> If you have a question about Azure Active Directory, outside of issues with the documentation provided in the OneDrive Developer Center, please ask it here: https://stackoverflow.com/questions/tagged/azure-active-directory
>
> _(DELETE THIS PARAGRAPH AFTER READING)_
>
#### Expected or Desired Behavior
> If you are reporting a bug, please describe the expected behavior.
>
> _(DELETE THIS PARAGRAPH AFTER READING)_
>
#### Observed Behavior
> If you are reporting a bug, please describe the observed behavior.
>
> Please also provide the following response headers corresponding to your request(s):
> - Date (in UTC, please)
> - request-id
> - SPRequestGuid (for requests made to OneDrive for Business)
>
> _(DELETE THIS PARAGRAPH AFTER READING)_
>
#### Steps to Reproduce
> If you are reporting a bug, please describe the steps to reproduce the bug in sufficient detail for another person to be able to reproduce it.
>
> _(DELETE THIS PARAGRAPH AFTER READING)_
>
Thank you.
[ ]: http://aka.ms/onedrive-api-issues
[x]: http://aka.ms/onedrive-api-issues",1,onedrive api search request always returns an empty collection thank you for reporting an issue or suggesting an enhancement we appreciate your feedback to help the team to understand your needs please complete the below template to ensure we have the necessary details to assist you submission guidelines questions and bugs are welcome please let us know what s on your mind if you are reporting an issue around any of the documents or articles please provide clear reference s to the specific file s or url s remember to include sufficient details and context if you have multiple issues please submit them as separate issues so we can track resolution delete this paragraph after reading category question documentation issue bug for the above list an empty checkbox is a checked checkbox is with no space between the brackets use the preview tab at the top right to preview the rendering before submitting your issue if you are planning to share a new feature request enhancement suggestion please use the onedrive developer platform uservoice at or the sharepoint developer platform uservoice at if you have a question about azure active directory outside of issues with the documentation provided in the onedrive developer center please ask it here delete this paragraph after reading expected or desired behavior if you are reporting a bug please describe the expected behavior delete this paragraph after reading observed behavior if you are reporting a bug please describe the observed behavior please also provide the following response headers corresponding to your request s date in utc please request id sprequestguid for requests made to onedrive for business delete this paragraph after reading steps to reproduce if you are reporting a bug please describe the steps to reproduce the bug in sufficient detail for another person to be able to reproduce it delete this paragraph after reading thank you ,1
7330,24648554123.0,IssuesEvent,2022-10-17 16:38:08,bcgov/api-services-portal,https://api.github.com/repos/bcgov/api-services-portal,closed,Cypress Test - Consumer detail - edit labels,automation,"1. Manage/Edit labels spec
1.1 authenticates Mark (Access-Manager)
1.2 Navigate to Consumer page and filter the product
1.3 Click on the first consumer
1.4 Verify that labels can be deleted
1.5 Verify that labels can be updated
1.6 Verify that labels can be added
",1.0,"Cypress Test - Consumer detail - edit labels - 1. Manage/Edit labels spec
1.1 authenticates Mark (Access-Manager)
1.2 Navigate to Consumer page and filter the product
1.3 Click on the first consumer
1.4 Verify that labels can be deleted
1.5 Verify that labels can be updated
1.6 Verify that labels can be added
",1,cypress test consumer detail edit labels manage edit labels spec authenticates mark access manager navigate to consumer page and filter the product click on the first consumer verify that labels can be deleted verify that labels can be updated verify that labels can be added ,1
1628,10471758732.0,IssuesEvent,2019-09-23 08:39:30,big-neon/bn-web,https://api.github.com/repos/big-neon/bn-web,opened,Automation: Big Neon : Test 27: Order Management: Order Page navigation,Automation,"**Pre-conditions:**
- User should have admin access
- User should be logged into Big Neon
- User should have created an event
- User should have purchased tickets as a consumer for the event
**Steps:**
1. Within the ""Events"" dashboard on Box Office, select the 3 dots at the top right corner of the event for which tickets have been purchased.
2. From the drop down list that appears, select the option; ""Dashboard""
3. User should be redirected to the dashboard view.
4. Within the dashboard, select the option ""Orders""
5. Drop down appears
6. Select ""Manage Orders""
7. User should be redirected to the Orders page
8. user should see a list of all orders/purchases displayed for the selected event
9. ",1.0,"Automation: Big Neon : Test 27: Order Management: Order Page navigation - **Pre-conditions:**
- User should have admin access
- User should be logged into Big Neon
- User should have created an event
- User should have purchased tickets as a consumer for the event
**Steps:**
1. Within the ""Events"" dashboard on Box Office, select the 3 dots at the top right corner of the event for which tickets have been purchased.
2. From the drop down list that appears, select the option; ""Dashboard""
3. User should be redirected to the dashboard view.
4. Within the dashboard, select the option ""Orders""
5. Drop down appears
6. Select ""Manage Orders""
7. User should be redirected to the Orders page
8. user should see a list of all orders/purchases displayed for the selected event
9. ",1,automation big neon test order management order page navigation pre conditions user should have admin access user should be logged into big neon user should have created an event user should have purchased tickets as a consumer for the event steps within the events dashboard on box office select the dots at the top right corner of the event for which tickets have been purchased from the drop down list that appears select the option dashboard user should be redirected to the dashboard view within the dashboard select the option orders drop down appears select manage orders user should be redirected to the orders page user should see a list of all orders purchases displayed for the selected event ,1
7449,24900295725.0,IssuesEvent,2022-10-28 20:05:11,red-hat-storage/ocs-ci,https://api.github.com/repos/red-hat-storage/ocs-ci,closed,Change the namespace selection to `openshift-storage` before searching/selecting OCS operator,bug Medium Priority ui_automation lifecycle/stale,"Failure is seen on Run ID 1643910120
The issue is in the function below, after the `logger.info(""Search OCS operator installed"")` line:
```
def verify_ocs_operator_tabs(self):
""""""
Verify OCS Operator Tabs
""""""
self.navigate_installed_operators_page()
logger.info(""Search OCS operator installed"")
self.do_send_keys(
locator=self.validation_loc[""search_ocs_installed""],
text=""OpenShift Container Storage"",
)
```",1.0,"Change the namespace selection to `openshift-storage` before searching/selecting OCS operator - Failure is seen on Run ID 1643910120
The issue is in the function below, after the `logger.info(""Search OCS operator installed"")` line:
```
def verify_ocs_operator_tabs(self):
""""""
Verify OCS Operator Tabs
""""""
self.navigate_installed_operators_page()
logger.info(""Search OCS operator installed"")
self.do_send_keys(
locator=self.validation_loc[""search_ocs_installed""],
text=""OpenShift Container Storage"",
)
```",1,change the namespace selection to openshift storage before searching selecting ocs operator failure is seen on run id issue is in function after logger info search ocs operator installed line def verify ocs operator tabs self verify ocs operator tabs self navigate installed operators page logger info search ocs operator installed self do send keys locator self validation loc text openshift container storage ,1
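A minimal sketch of the proposed fix, reusing the page-object helpers visible in the snippet above; the `do_click` call and the two namespace locator keys are assumptions and would need to exist in the page object and in `validation_loc` for this to run:

```python
def verify_ocs_operator_tabs(self):
    """
    Verify OCS Operator Tabs, scoping to the openshift-storage namespace first.
    """
    self.navigate_installed_operators_page()
    # Assumed locators: a project/namespace dropdown and its openshift-storage entry.
    logger.info("Select the openshift-storage namespace")
    self.do_click(self.validation_loc["namespace_dropdown"])
    self.do_click(self.validation_loc["namespace_openshift_storage"])
    logger.info("Search OCS operator installed")
    self.do_send_keys(
        locator=self.validation_loc["search_ocs_installed"],
        text="OpenShift Container Storage",
    )
```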
4510,16745951549.0,IssuesEvent,2021-06-11 15:31:35,mozilla-mobile/focus-ios,https://api.github.com/repos/mozilla-mobile/focus-ios,opened,"Refactor waitforExistence, waitforHittable, waitForEnable methods",eng:automation,"Let's refactor these methods to remove the 30 secs expectation and add a waitFor general method that can be used by all of them.
Hopefully this may fix issue #1928 ",1.0,"Refactor waitforExistence, waitforHittable, waitForEnable methods - Let's refactor these methods to remove the 30 secs expectation and add a waitFor general method that can be used by all of them.
Hopefully this may fix issue #1928 ",1,refactor waitforexistence waitforhittable waitforenable methods let s refactor these methods to remove the secs expectation and add a waitfor general method that can be used by all of them hopefully this may fix issue ,1
190782,15256111179.0,IssuesEvent,2021-02-20 18:48:17,AbelardoCuesta/git_flow_practice,https://api.github.com/repos/AbelardoCuesta/git_flow_practice,closed,A commit that does not follow the code convention or a fix to be made,documentation,"L
The latest commit has the following message:
`Se añade codigo base`
This issue is just a reminder of the commit-message convention and can be closed.",1.0,"A commit that does not follow the code convention or a fix to be made - L
The latest commit has the following message:
`Se añade codigo base`
This issue is just a reminder of the commit-message convention and can be closed.",0,a commit that does not follow the code convention or a fix to be made l the latest commit has the following message se añade codigo base this issue is just a reminder of the commit message convention and can be closed ,0
7602,25246437968.0,IssuesEvent,2022-11-15 11:26:29,ita-social-projects/TeachUA,https://api.github.com/repos/ita-social-projects/TeachUA,closed,[Гуртки tab] Page content disappears after going to 54+ page,bug Frontend UI Priority: High Automation,"**Environment:** Windows 11, Google Chrome 106.0.5249.91 (Official Build) (64-bit)
**Reproducible:** always.
**Build found:** commit [0008581](https://github.com/ita-social-projects/TeachUA/commit/0008581a5a00a1ddc3514ab25f9f5745e166e26d)
**Preconditions**
1. Go to the webpage: https://speak-ukrainian.org.ua/dev/
**Steps to reproduce**
1. Go to 'Гуртки' tab.
2. Scroll down to pagination.
3. Click on page number 58.
**Actual result**
All page content disappears.
https://user-images.githubusercontent.com/82941067/193587440-c7493425-dfc0-49a9-9294-7e931a4dff03.mp4
**Expected result**
1. Club components that are present on page number 58 should appear.
2. '>' button should be disabled.
",1.0,"[Гуртки tab] Page content disappears after going to 54+ page - **Environment:** Windows 11, Google Chrome 106.0.5249.91 (Official Build) (64-bit)
**Reproducible:** always.
**Build found:** commit [0008581](https://github.com/ita-social-projects/TeachUA/commit/0008581a5a00a1ddc3514ab25f9f5745e166e26d)
**Preconditions**
1. Go to the webpage: https://speak-ukrainian.org.ua/dev/
**Steps to reproduce**
1. Go to 'Гуртки' tab.
2. Scroll down to pagination.
3. Click on page number 58.
**Actual result**
All page content disappears.
https://user-images.githubusercontent.com/82941067/193587440-c7493425-dfc0-49a9-9294-7e931a4dff03.mp4
**Expected result**
1. Club components that are present on page number 58 should appear.
2. '>' button should be disabled.
",1, page content disappears after going to page environment windows google chrome official build bit reproducible always build found commit preconditions go to the webpage steps to reproduce go to гуртки tab scroll down to pagination click on page number actual result all page content disappears expected result club components that are present on page number should appear button should be disabled ,1
41673,10563560586.0,IssuesEvent,2019-10-04 21:20:07,department-of-veterans-affairs/va.gov-team,https://api.github.com/repos/department-of-veterans-affairs/va.gov-team,closed,[KEYBOARD]: Map - Focus is moved to the search results when users press arrow keys or +/-,508-defect-2 508/Accessibility facility locator frontend vsa vsa-global-ux,"## Issue
Focus is being moved from the map to the search results when keyboard users press an arrow key to shift the map in a direction, or when users press plus or minus to zoom the map. This was noted as an SC 3.2.2 violation. Animated GIF attached below.
## Related Issues
* https://app.zenhub.com/workspaces/vsp-5cedc9cce6e3335dc5a49fc4/issues/department-of-veterans-affairs/va.gov-team/491
* https://app.zenhub.com/workspaces/vsp-5cedc9cce6e3335dc5a49fc4/issues/department-of-veterans-affairs/va.gov-team/515
## Audit Finding
* Note 5, Defect 1 of 2
## Acceptance Criteria
* As a keyboard user, I would like the map to retain focus when I press an arrow key, or any other keyboard shortcut. The number of results should update as before, but the yellow focus halo should stay on the map.
## Environment
* MacOS Mojave
* Chrome latest
* https://staging.va.gov/find-locations/
## WCAG or Vendor Guidance (optional)
* [On Input: Understanding SC 3.2.2](https://www.w3.org/TR/UNDERSTANDING-WCAG20/consistent-behavior-unpredictable-change.html)
## Screenshots or Trace Logs
",1.0,"[KEYBOARD]: Map - Focus is moved to the search results when users press arrow keys or +/- - ## Issue
Focus is being moved from the map to the search results when keyboard users press an arrow key to shift the map in a direction, or when users press plus or minus to zoom the map. This was noted as an SC 3.2.2 violation. Animated GIF attached below.
## Related Issues
* https://app.zenhub.com/workspaces/vsp-5cedc9cce6e3335dc5a49fc4/issues/department-of-veterans-affairs/va.gov-team/491
* https://app.zenhub.com/workspaces/vsp-5cedc9cce6e3335dc5a49fc4/issues/department-of-veterans-affairs/va.gov-team/515
## Audit Finding
* Note 5, Defect 1 of 2
## Acceptance Criteria
* As a keyboard user, I would like the map to retain focus when I press an arrow key, or any other keyboard shortcut. The number of results should update as before, but the yellow focus halo should stay on the map.
## Environment
* MacOS Mojave
* Chrome latest
* https://staging.va.gov/find-locations/
## WCAG or Vendor Guidance (optional)
* [On Input: Understanding SC 3.2.2](https://www.w3.org/TR/UNDERSTANDING-WCAG20/consistent-behavior-unpredictable-change.html)
## Screenshots or Trace Logs
",0, map focus is moved to the search results when users press arrow keys or issue focus is being moved from the map to the search results when keyboard users press an arrow key to shift the map in a direction or when users press plus or minus to zoom the map this was noted as an sc violation animated gif attached below related issues audit finding note defect of acceptance criteria as a keyboard user i would like the map to retain focus when i press an arrow key or any other keyboard shortcut the number of results should update as before but the yellow focus halo should stay on the map environment macos mojave chrome latest wcag or vendor guidance optional screenshots or trace logs ,0
2892,12746209508.0,IssuesEvent,2020-06-26 15:32:01,chavarera/python-mini-projects,https://api.github.com/repos/chavarera/python-mini-projects,opened,Write a program to download multiple images,Automation beginner,"**Problem Statement**
write a program that accepts a category from the user and downloads n images to the local system",1.0,"Write a program to download multiple images - **Problem Statement**
write a program that accepts a category from the user and downloads n images to the local system",1,write a program to download multiple images problem statement write a program that accepts a category from the user and downloads n images to the local system,1
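A minimal Python sketch of such a program, assuming a hypothetical JSON endpoint (https://example.com/api/images) that returns image URLs for a search term; a real implementation would swap in an actual image-search API:

```python
import os
import requests

def download_images(category: str, count: int, dest: str = "downloads",
                    api_url: str = "https://example.com/api/images") -> None:
    """Download up to `count` images for `category` into `dest` (hypothetical API)."""
    os.makedirs(dest, exist_ok=True)
    # Ask the (assumed) search endpoint for image URLs matching the category.
    resp = requests.get(api_url, params={"q": category, "limit": count}, timeout=30)
    resp.raise_for_status()
    for i, url in enumerate(resp.json().get("urls", [])[:count]):
        image = requests.get(url, timeout=30)
        image.raise_for_status()
        with open(os.path.join(dest, f"{category}_{i}.jpg"), "wb") as fh:
            fh.write(image.content)

if __name__ == "__main__":
    download_images(input("Category: "), int(input("How many images? ")))
```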
5986,21787901305.0,IssuesEvent,2022-05-14 12:41:10,ThinkingEngine-net/PickleTestSuite,https://api.github.com/repos/ThinkingEngine-net/PickleTestSuite,closed,Edge Chromium - Performance Logging functions not working,bug Browser Automation,"Currently disabled, so page status is not available. Needs to be fixed.",1.0,"Edge Chromium - Performance Logging functions not working - Currently disabled, so page status is not available. Needs to be fixed.",1,edge chromium performance logging functions not working currently disabled so page status is not available needs to be fixed ,1
6813,23939521762.0,IssuesEvent,2022-09-11 18:03:47,smcnab1/op-question-mark,https://api.github.com/repos/smcnab1/op-question-mark,closed,[FR] Implementation of Apple Shortcuts,Status: Review Needed Type: Enhancement Priority: Low For: Automations,"**Describe the solution you'd like**
Implementation of Apple Shortcuts to trigger actions in home assistant
**Describe alternatives you've considered**
Home Assistant Widget for iOS; also yet to be fully investigated
**Additional context**
Used to trigger alarm/scenes/scripts from phone Home Screen. Make life easier for wife to access",1.0,"[FR] Implementation of Apple Shortcuts - **Describe the solution you'd like**
Implementation of Apple Shortcuts to trigger actions in home assistant
**Describe alternatives you've considered**
Home Assistant Widget for iOS; also yet to be fully investigated
**Additional context**
Used to trigger alarm/scenes/scripts from phone Home Screen. Make life easier for wife to access",1, implementation of apple shortcuts describe the solution you d like implementation of apple shortcuts to trigger actions in home assistant describe alternatives you ve considered home assistant widget for ios also yet to fully investigate additional context used to trigger alarm scenes scripts from phone home screen make life easier for wife to access,1
306522,26476101271.0,IssuesEvent,2023-01-17 11:14:31,woocommerce/woocommerce,https://api.github.com/repos/woocommerce/woocommerce,opened,[HPOS]: Rename `cot` to `hpos` in our workflows ,type: task focus: custom order tables focus: smoke tests,"### Describe the solution you'd like
Our workflows are referencing `cot` in many places when they should be referencing the new name `hpos`.
This task is to update the names and references so they use the more appropriate `hpos` reference, being mindful of the impacts 😊
### Describe alternatives you've considered
n/a
### Additional context
_No response_",1.0,"[HPOS]: Rename `cot` to `hpos` in our workflows - ### Describe the solution you'd like
Our workflows are referencing `cot` in many places when they should be referencing the new name `hpos`.
This task is to update the names and references so they use the more appropriate `hpos` reference, being mindful of the impacts 😊
### Describe alternatives you've considered
n/a
### Additional context
_No response_",0, rename cot to hpos in our workflows describe the solution you d like our workflows are referencing cot in many places when they should be referencing the new name hpos this task is to update the names and references so they use the more appropriate hpos reference being mindful of the impacts 😊 describe alternatives you ve considered n a additional context no response ,0
4181,15736267478.0,IssuesEvent,2021-03-30 00:15:22,aws/aws-cli,https://api.github.com/repos/aws/aws-cli,closed,Sync missing files,automation-exempt needs-reproduction s3sync,"Server: `EC2 linux`
Version: `aws-cli/1.16.108 Python/2.7.15 Linux/4.9.62-21.56.amzn1.x86_64 botocore/1.12.98`
After running `aws s3 sync` over 270T of data, I lost a few GB of files. Sync didn't copy files with special characters at all.
Example of file `/data/company/storage/projects/1013815/3.Company Estimates B. Estimates`
Had to use `cp -R -n`",1.0,"Sync missing files - Server: `EC2 linux`
Version: `aws-cli/1.16.108 Python/2.7.15 Linux/4.9.62-21.56.amzn1.x86_64 botocore/1.12.98`
After running `aws s3 sync` over 270T of data, I lost a few GB of files. Sync didn't copy files with special characters at all.
Example of file `/data/company/storage/projects/1013815/3.Company Estimates B. Estimates`
Had to use `cp -R -n`",1,sync missing files server linux version aws cli python linux botocore after aws sync running over of data i lost few gb of files sync didn t copy files with special characters at all example of file data company storage projects company estimates b estimates had to use cp r n ,1
5508,19829704262.0,IssuesEvent,2022-01-20 10:39:43,longhorn/longhorn,https://api.github.com/repos/longhorn/longhorn,closed,[FEATURE] Mutating/Validating admission webhook,kind/enhancement priority/1 require/automation-e2e area/api,"**Is your feature request related to a problem? Please describe.**
This is part of #791: besides schema validation, we need validation hooks that integrate with Kubernetes to validate or mutate Longhorn CRs.
**Describe the solution you'd like**
Hook our resource validation into the Kubernetes CR CRUD flow.
**Describe alternatives you've considered**
N/A
**Additional context**
- https://github.com/longhorn/longhorn/issues/2570#issuecomment-965945136
- https://github.com/rancher/webhook",1.0,"[FEATURE] Mutating/Validating admission webhook - **Is your feature request related to a problem? Please describe.**
This is part of #791: besides schema validation, we need validation hooks that integrate with Kubernetes to validate or mutate Longhorn CRs.
**Describe the solution you'd like**
Hook our resource validation into the Kubernetes CR CRUD flow.
**Describe alternatives you've considered**
N/A
**Additional context**
- https://github.com/longhorn/longhorn/issues/2570#issuecomment-965945136
- https://github.com/rancher/webhook",1, mutating validating admission webhook is your feature request related to a problem please describe this is part of besides having schema validation we need to have validation hooks to integrate into kubernetes to validate or mutate longhorn crs describe the solution you d like have our resources validation to hook into kubernetes cr crud flow describe alternatives you ve considered n a additional context ,1
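For context, a validating admission webhook is an HTTPS endpoint that receives an AdmissionReview from the API server and answers allowed or denied. A minimal Python sketch (plain `http.server`, no TLS, and a hypothetical rule requiring `numberOfReplicas` on the incoming custom resource) could look like this:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class ValidatingWebhook(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the AdmissionReview sent by the Kubernetes API server.
        length = int(self.headers.get("Content-Length", 0))
        review = json.loads(self.rfile.read(length))
        request = review.get("request", {})
        spec = request.get("object", {}).get("spec", {})

        # Hypothetical rule: reject resources that do not set numberOfReplicas.
        allowed = "numberOfReplicas" in spec
        response = {
            "apiVersion": "admission.k8s.io/v1",
            "kind": "AdmissionReview",
            "response": {
                "uid": request.get("uid", ""),
                "allowed": allowed,
                "status": {} if allowed else {"message": "numberOfReplicas is required"},
            },
        }

        payload = json.dumps(response).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

if __name__ == "__main__":
    # A real webhook must serve HTTPS with a certificate the API server trusts.
    HTTPServer(("0.0.0.0", 8443), ValidatingWebhook).serve_forever()
```

A mutating webhook follows the same shape but additionally returns a base64-encoded JSON patch in its response.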
145826,13162169241.0,IssuesEvent,2020-08-10 21:01:32,pivotal/cloud-service-broker,https://api.github.com/repos/pivotal/cloud-service-broker,closed,"[DOCS] How to add ""read_write_endpoint_failover_policy"" to defaults in example-configs",documentation enhancement,"## Documentation Requested
Would it be possible to add in the ""example-configs"" document an example of setting the ""read_write_endpoint_failover_policy"" as a default for the ""csb-azure-mssql-db-failover-group""?
File:
https://github.com/pivotal/cloud-service-broker/blob/master/docs/example-configs.md
Section:
Azure csb-azure-mssql-db-failover-group
Feature:
""read_write_endpoint_failover_policy"":""Manual/Automatic""
https://github.com/pivotal/cloud-service-broker/commit/0ebfac6428c268e22d394f0f7fdaa68759be87f5
",1.0,"[DOCS] How to add ""read_write_endpoint_failover_policy"" to defaults in example-configs - ## Documentation Requested
Would it be possible to add in the ""example-configs"" document an example of setting the ""read_write_endpoint_failover_policy"" as a default for the ""csb-azure-mssql-db-failover-group""?
File:
https://github.com/pivotal/cloud-service-broker/blob/master/docs/example-configs.md
Section:
Azure csb-azure-mssql-db-failover-group
Feature:
""read_write_endpoint_failover_policy"":""Manual/Automatic""
https://github.com/pivotal/cloud-service-broker/commit/0ebfac6428c268e22d394f0f7fdaa68759be87f5
",0, how to add read write endpoint failover policy to defaults in example configs documentation requested would it be possible to add in the example configs document an example of setting the read write endpoint failover policy as a default for the csb azure mssql db failover group file section azure csb azure mssql db failover group feature read write endpoint failover policy manual automatic ,0
5444,19619734604.0,IssuesEvent,2022-01-07 03:48:38,pingcap/tiflow,https://api.github.com/repos/pingcap/tiflow,closed,"v5.2.3 upgrade to v5.4.0-nightly-20211221 fail for ""server_config is only supported with TiCDC version v4.0.13 or later""",type/bug severity/minor found/automation area/ticdc,"### What did you do?
1. install v5.2.3 tidb cluster, with config:
cdc:
sorter.max-memory-consumption: 1073741824
2. upgrade to v5.4.0-nightly-20211221
3. upgrade fail
### What did you expect to see?
upgrade success
### What did you see instead?
2021-12-30T09:17:51.941+0800 INFO Execute command finished {""code"": 1, ""error"": ""init config failed: cdc-peer:8300: server_config is only supported with TiCDC version v4.0.13 or later"", ""errorVerbose"": ""server_config is only supported with TiCDC version v4.0.13 or later\ngithub.com/pingcap/tiup/pkg/cluster/spec.(*CDCInstance).InitConfig\n\tgithub.com/pingcap/tiup/pkg/cluster/spec/cdc.go:168\ngithub.com/pingcap/tiup/pkg/cluster/task.(*InitConfig).Execute\n\tgithub.com/pingcap/tiup/pkg/cluster/task/init_config.go:51\ngithub.com/pingcap/tiup/pkg/cluster/task.(*Serial).Execute\n\tgithub.com/pingcap/tiup/pkg/cluster/task/task.go:85\ngithub.com/pingcap/tiup/pkg/cluster/task.(*Parallel).Execute.func1\n\tgithub.com/pingcap/tiup/pkg/cluster/task/task.go:142\nruntime.goexit\n\truntime/asm_amd64.s:1581\ninit config failed: cdc-peer:8300""}
### Versions of the cluster
Upstream TiDB cluster version (execute `SELECT tidb_version();` in a MySQL client):
```console
(paste TiDB cluster version here)
```
TiCDC version (execute `cdc version`):
[release-version=v5.2.3] [git-hash=a04ddac9fe83c8bdff267e4c44150060ea05f5ec] [git-branch=heads/refs/tags/v5.2.3] [utc-build-time=""2021-11-26 07:39:58""] [go-version=""go version go1.16.4 linux/amd64""]
```console
(paste TiCDC version here)
```",1.0,"v5.2.3 upgrade to v5.4.0-nightly-20211221 fail for ""server_config is only supported with TiCDC version v4.0.13 or later"" - ### What did you do?
1. install v5.2.3 tidb cluster, with config:
cdc:
sorter.max-memory-consumption: 1073741824
2. upgrade to v5.4.0-nightly-20211221
3. upgrade fail
### What did you expect to see?
upgrade success
### What did you see instead?
2021-12-30T09:17:51.941+0800 INFO Execute command finished {""code"": 1, ""error"": ""init config failed: cdc-peer:8300: server_config is only supported with TiCDC version v4.0.13 or later"", ""errorVerbose"": ""server_config is only supported with TiCDC version v4.0.13 or later\ngithub.com/pingcap/tiup/pkg/cluster/spec.(*CDCInstance).InitConfig\n\tgithub.com/pingcap/tiup/pkg/cluster/spec/cdc.go:168\ngithub.com/pingcap/tiup/pkg/cluster/task.(*InitConfig).Execute\n\tgithub.com/pingcap/tiup/pkg/cluster/task/init_config.go:51\ngithub.com/pingcap/tiup/pkg/cluster/task.(*Serial).Execute\n\tgithub.com/pingcap/tiup/pkg/cluster/task/task.go:85\ngithub.com/pingcap/tiup/pkg/cluster/task.(*Parallel).Execute.func1\n\tgithub.com/pingcap/tiup/pkg/cluster/task/task.go:142\nruntime.goexit\n\truntime/asm_amd64.s:1581\ninit config failed: cdc-peer:8300""}
### Versions of the cluster
Upstream TiDB cluster version (execute `SELECT tidb_version();` in a MySQL client):
```console
(paste TiDB cluster version here)
```
TiCDC version (execute `cdc version`):
[release-version=v5.2.3] [git-hash=a04ddac9fe83c8bdff267e4c44150060ea05f5ec] [git-branch=heads/refs/tags/v5.2.3] [utc-build-time=""2021-11-26 07:39:58""] [go-version=""go version go1.16.4 linux/amd64""]
```console
(paste TiCDC version here)
```",1, upgrade to nightly fail for server config is only supported with ticdc version or later what did you do install tidb cluster with config cdc sorter max memory consumption upgrade to nightly upgrade fail what did you expect to see upgrade success what did you see instead info execute command finished code error init config failed cdc peer server config is only supported with ticdc version or later errorverbose server config is only supported with ticdc version or later ngithub com pingcap tiup pkg cluster spec cdcinstance initconfig n tgithub com pingcap tiup pkg cluster spec cdc go ngithub com pingcap tiup pkg cluster task initconfig execute n tgithub com pingcap tiup pkg cluster task init config go ngithub com pingcap tiup pkg cluster task serial execute n tgithub com pingcap tiup pkg cluster task task go ngithub com pingcap tiup pkg cluster task parallel execute n tgithub com pingcap tiup pkg cluster task task go nruntime goexit n truntime asm s ninit config failed cdc peer versions of the cluster upstream tidb cluster version execute select tidb version in a mysql client console paste tidb cluster version here ticdc version execute cdc version console paste ticdc version here ,1
61827,14641784015.0,IssuesEvent,2020-12-25 08:21:41,hackmdio/codimd,https://api.github.com/repos/hackmdio/codimd,closed,Stored XSS in mermaid,security,"Hi,
This weekend I played hxpctf; during the competition there was a challenge called hackme. It was a Docker with CodiMD. My solution was unintended: I used Google Analytics to exploit a stored XSS bug in mermaid.
Here is my [PoC](https://github.com/Alemmi/ctf-writeups/blob/main/hxpctf-2020/hackme/solution.md)
The bug seems to be known by the mermaid developers ([issue](https://github.com/mermaid-js/mermaid/issues/869)).
I tried it on [hackmd.io](https://hackmd.io/) and it works, too.
Hope you can fix soon!
P.S. Now I'm going to reopen the issue in mermaid repository. This is also a duplicate, but the other issues are marked as ""solved"".
Thanks
Alessandro Mizzaro",True,"Stored XSS in mermaid - Hi,
This weekend I played hxpctf; during the competition there was a challenge called hackme. It was a Docker with CodiMD. My solution was unintended: I used Google Analytics to exploit a stored XSS bug in mermaid.
Here is my [PoC](https://github.com/Alemmi/ctf-writeups/blob/main/hxpctf-2020/hackme/solution.md)
The bug seems to be known by the mermaid developers ([issue](https://github.com/mermaid-js/mermaid/issues/869)).
I tried it on [hackmd.io](https://hackmd.io/) and it works, too.
Hope you can fix soon!
P.S. Now I'm going to reopen the issue in mermaid repository. This is also a duplicate, but the other issues are marked as ""solved"".
Thanks
Alessandro Mizzaro",0,stored xss in mermaid hi this weekend i played hxpctf during competition there was a challenge called hackme it was a docker with codimd my solution was unintended i use google analytics to exploit a stored xss bug in mermaid here is my the bug seems to be known by the mermaid developers i tryed it on and it works too hope you can fix soon p s now i m going to reopen the issue in mermaid repository this is also a duplicate but the other issues are marked as solved thanks alessandro mizzaro,0
6447,23177209371.0,IssuesEvent,2022-07-31 15:46:31,keptn/keptn,https://api.github.com/repos/keptn/keptn,closed,[doc] Automation for creating documentation for a new release and versioning of release docu,doc stale research needs discussion release-automation area:devops,"## User story
When releasing a new version of Keptn, the documentation is also released based on the same release tag. As a user, I can switch between the release versions, while the latest stable version is shown by default.
*Future situation:* As we are going to change the versioning of releases by incrementing the Minor version (Major.Minor.Patch), the current documentation approach does not scale. We would have to duplicate the docu for each new release.
### Details
* Using tagging / branching to create release documentation in https://github.com/keptn/keptn.github.io
* On the page, I can switch between the release docu. For example, see istio.io:

* When switching to an older release, the release version is reflected in the URL: https://istio.io/v1.9/
* Consequently, we should not show the documentation for previous releases, but rather the release docu the user has selected:

### Advantage
* By applying this approach, it becomes obsolete to duplicate the docu for each release in: https://github.com/keptn/keptn.github.io/tree/master/content/docs",1.0,"[doc] Automation for creating documentation for a new release and versioning of release docu - ## User story
When releasing a new version of Keptn, the documentation is also released based on the same release tag. As a user, I can switch between the release versions, while the latest stable version is shown by default.
*Future situation:* As we are going to change the versioning of releases by incrementing the Minor version (Major.Minor.Patch), the current documentation approach does not scale. We would have to duplicate the docu for each new release.
### Details
* Using tagging / branching to create release documentation in https://github.com/keptn/keptn.github.io
* On the page, I can switch between the release docu. For example, see istio.io:

* When switching to an older release, the release version is reflected in the URL: https://istio.io/v1.9/
* Consequently, we should not show the documentation for previous releases, but rather the release docu the user has selected:

### Advantage
* By applying this approach, it becomes obsolete to duplicate the docu for each release in: https://github.com/keptn/keptn.github.io/tree/master/content/docs",1, automation for creating documentation for a new release and versioning of release docu user story when releasing a new version of keptn the documentation is also released based on the same release tag as a user i can switch between the release versions while the latest stable version is shown by default future situation as we are going to change the versioning of releases by incrementing the minor version major minor patch the current documentation approach does not scale we would have duplicate the docu for each new release details using tagging branching to create release documentation in on the page i can switch between the release docu for example see istio io when switching to an older release the release version is reflected in the url consequently we should not show the documentation for previous releases but rather the release docu the user has selected advantage by applying this approach it becomes obsolete to duplicate the docu for each release in ,1
2106,11394590259.0,IssuesEvent,2020-01-30 09:40:06,elastic/apm-integration-testing,https://api.github.com/repos/elastic/apm-integration-testing,opened,"Kibana OSS container does not start correctly - Error: Unknown configuration key(s): ""xpack.apm.serviceMapEnabled"".",automation bug,"We detected on ITs that Python is not running the test on the test that starts Kibana, the following error is shown in the Kibana logs
```
{""type"":""log"",""@timestamp"":""2020-01-30T03:18:11+00:00"",""tags"":[""warning"",""plugins-discovery""],""pid"":6,""message"":""Expect plugin \""id\"" in camelCase, but found: apm_oss""}
{""type"":""log"",""@timestamp"":""2020-01-30T03:18:11+00:00"",""tags"":[""warning"",""plugins-discovery""],""pid"":6,""message"":""Expect plugin \""id\"" in camelCase, but found: dashboard_embeddable_container""}
{""type"":""log"",""@timestamp"":""2020-01-30T03:18:11+00:00"",""tags"":[""warning"",""plugins-discovery""],""pid"":6,""message"":""Expect plugin \""id\"" in camelCase, but found: dev_tools""}
{""type"":""log"",""@timestamp"":""2020-01-30T03:18:11+00:00"",""tags"":[""warning"",""plugins-discovery""],""pid"":6,""message"":""Expect plugin \""id\"" in camelCase, but found: eui_utils""}
{""type"":""log"",""@timestamp"":""2020-01-30T03:18:11+00:00"",""tags"":[""warning"",""plugins-discovery""],""pid"":6,""message"":""Expect plugin \""id\"" in camelCase, but found: kibana_legacy""}
{""type"":""log"",""@timestamp"":""2020-01-30T03:18:11+00:00"",""tags"":[""warning"",""plugins-discovery""],""pid"":6,""message"":""Expect plugin \""id\"" in camelCase, but found: status_page""}
{""type"":""log"",""@timestamp"":""2020-01-30T03:18:12+00:00"",""tags"":[""fatal"",""root""],""pid"":6,""message"":""{ Error: Unknown configuration key(s): \""xpack.apm.serviceMapEnabled\"". Check for spelling errors and ensure that expected plugins are installed.\n at ensureValidConfiguration (/usr/share/kibana/src/core/server/legacy/config/ensure_valid_configuration.js:46:11) code: 'InvalidConfig', processExitCode: 64, cause: undefined }""}
FATAL Error: Unknown configuration key(s): ""xpack.apm.serviceMapEnabled"". Check for spelling errors and ensure that expected plugins are installed.
{""type"":""log"",""@timestamp"":""2020-01-30T03:18:16+00:00"",""tags"":[""warning"",""plugins-discovery""],""pid"":6,""message"":""Expect plugin \""id\"" in camelCase, but found: apm_oss""}
{""type"":""log"",""@timestamp"":""2020-01-30T03:18:16+00:00"",""tags"":[""warning"",""plugins-discovery""],""pid"":6,""message"":""Expect plugin \""id\"" in camelCase, but found: dashboard_embeddable_container""}
{""type"":""log"",""@timestamp"":""2020-01-30T03:18:16+00:00"",""tags"":[""warning"",""plugins-discovery""],""pid"":6,""message"":""Expect plugin \""id\"" in camelCase, but found: dev_tools""}
{""type"":""log"",""@timestamp"":""2020-01-30T03:18:16+00:00"",""tags"":[""warning"",""plugins-discovery""],""pid"":6,""message"":""Expect plugin \""id\"" in camelCase, but found: eui_utils""}
{""type"":""log"",""@timestamp"":""2020-01-30T03:18:16+00:00"",""tags"":[""warning"",""plugins-discovery""],""pid"":6,""message"":""Expect plugin \""id\"" in camelCase, but found: kibana_legacy""}
{""type"":""log"",""@timestamp"":""2020-01-30T03:18:16+00:00"",""tags"":[""warning"",""plugins-discovery""],""pid"":6,""message"":""Expect plugin \""id\"" in camelCase, but found: status_page""}
{""type"":""log"",""@timestamp"":""2020-01-30T03:18:17+00:00"",""tags"":[""fatal"",""root""],""pid"":6,""message"":""{ Error: Unknown configuration key(s): \""xpack.apm.serviceMapEnabled\"". Check for spelling errors and ensure that expected plugins are installed.\n at ensureValidConfiguration (/usr/share/kibana/src/core/server/legacy/config/ensure_valid_configuration.js:46:11) code: 'InvalidConfig', processExitCode: 64, cause: undefined }""}
FATAL Error: Unknown configuration key(s): ""xpack.apm.serviceMapEnabled"". Check for spelling errors and ensure that expected plugins are installed.
```
",1.0,"Kibana OSS container does not start correctly - Error: Unknown configuration key(s): ""xpack.apm.serviceMapEnabled"". - We detected on ITs that Python is not running the test on the test that starts Kibana, the following error is shown in the Kibana logs
```
{""type"":""log"",""@timestamp"":""2020-01-30T03:18:11+00:00"",""tags"":[""warning"",""plugins-discovery""],""pid"":6,""message"":""Expect plugin \""id\"" in camelCase, but found: apm_oss""}
{""type"":""log"",""@timestamp"":""2020-01-30T03:18:11+00:00"",""tags"":[""warning"",""plugins-discovery""],""pid"":6,""message"":""Expect plugin \""id\"" in camelCase, but found: dashboard_embeddable_container""}
{""type"":""log"",""@timestamp"":""2020-01-30T03:18:11+00:00"",""tags"":[""warning"",""plugins-discovery""],""pid"":6,""message"":""Expect plugin \""id\"" in camelCase, but found: dev_tools""}
{""type"":""log"",""@timestamp"":""2020-01-30T03:18:11+00:00"",""tags"":[""warning"",""plugins-discovery""],""pid"":6,""message"":""Expect plugin \""id\"" in camelCase, but found: eui_utils""}
{""type"":""log"",""@timestamp"":""2020-01-30T03:18:11+00:00"",""tags"":[""warning"",""plugins-discovery""],""pid"":6,""message"":""Expect plugin \""id\"" in camelCase, but found: kibana_legacy""}
{""type"":""log"",""@timestamp"":""2020-01-30T03:18:11+00:00"",""tags"":[""warning"",""plugins-discovery""],""pid"":6,""message"":""Expect plugin \""id\"" in camelCase, but found: status_page""}
{""type"":""log"",""@timestamp"":""2020-01-30T03:18:12+00:00"",""tags"":[""fatal"",""root""],""pid"":6,""message"":""{ Error: Unknown configuration key(s): \""xpack.apm.serviceMapEnabled\"". Check for spelling errors and ensure that expected plugins are installed.\n at ensureValidConfiguration (/usr/share/kibana/src/core/server/legacy/config/ensure_valid_configuration.js:46:11) code: 'InvalidConfig', processExitCode: 64, cause: undefined }""}
FATAL Error: Unknown configuration key(s): ""xpack.apm.serviceMapEnabled"". Check for spelling errors and ensure that expected plugins are installed.
{""type"":""log"",""@timestamp"":""2020-01-30T03:18:16+00:00"",""tags"":[""warning"",""plugins-discovery""],""pid"":6,""message"":""Expect plugin \""id\"" in camelCase, but found: apm_oss""}
{""type"":""log"",""@timestamp"":""2020-01-30T03:18:16+00:00"",""tags"":[""warning"",""plugins-discovery""],""pid"":6,""message"":""Expect plugin \""id\"" in camelCase, but found: dashboard_embeddable_container""}
{""type"":""log"",""@timestamp"":""2020-01-30T03:18:16+00:00"",""tags"":[""warning"",""plugins-discovery""],""pid"":6,""message"":""Expect plugin \""id\"" in camelCase, but found: dev_tools""}
{""type"":""log"",""@timestamp"":""2020-01-30T03:18:16+00:00"",""tags"":[""warning"",""plugins-discovery""],""pid"":6,""message"":""Expect plugin \""id\"" in camelCase, but found: eui_utils""}
{""type"":""log"",""@timestamp"":""2020-01-30T03:18:16+00:00"",""tags"":[""warning"",""plugins-discovery""],""pid"":6,""message"":""Expect plugin \""id\"" in camelCase, but found: kibana_legacy""}
{""type"":""log"",""@timestamp"":""2020-01-30T03:18:16+00:00"",""tags"":[""warning"",""plugins-discovery""],""pid"":6,""message"":""Expect plugin \""id\"" in camelCase, but found: status_page""}
{""type"":""log"",""@timestamp"":""2020-01-30T03:18:17+00:00"",""tags"":[""fatal"",""root""],""pid"":6,""message"":""{ Error: Unknown configuration key(s): \""xpack.apm.serviceMapEnabled\"". Check for spelling errors and ensure that expected plugins are installed.\n at ensureValidConfiguration (/usr/share/kibana/src/core/server/legacy/config/ensure_valid_configuration.js:46:11) code: 'InvalidConfig', processExitCode: 64, cause: undefined }""}
FATAL Error: Unknown configuration key(s): ""xpack.apm.serviceMapEnabled"". Check for spelling errors and ensure that expected plugins are installed.
```
",1,kibana oss container does not start correctly error unknown configuration key s xpack apm servicemapenabled we detected on its that python is not running the test on the test that starts kibana the following error is shown in the kibana logs type log timestamp tags pid message expect plugin id in camelcase but found apm oss type log timestamp tags pid message expect plugin id in camelcase but found dashboard embeddable container type log timestamp tags pid message expect plugin id in camelcase but found dev tools type log timestamp tags pid message expect plugin id in camelcase but found eui utils type log timestamp tags pid message expect plugin id in camelcase but found kibana legacy type log timestamp tags pid message expect plugin id in camelcase but found status page type log timestamp tags pid message error unknown configuration key s xpack apm servicemapenabled check for spelling errors and ensure that expected plugins are installed n at ensurevalidconfiguration usr share kibana src core server legacy config ensure valid configuration js code invalidconfig processexitcode cause undefined fatal error unknown configuration key s xpack apm servicemapenabled check for spelling errors and ensure that expected plugins are installed type log timestamp tags pid message expect plugin id in camelcase but found apm oss type log timestamp tags pid message expect plugin id in camelcase but found dashboard embeddable container type log timestamp tags pid message expect plugin id in camelcase but found dev tools type log timestamp tags pid message expect plugin id in camelcase but found eui utils type log timestamp tags pid message expect plugin id in camelcase but found kibana legacy type log timestamp tags pid message expect plugin id in camelcase but found status page type log timestamp tags pid message error unknown configuration key s xpack apm servicemapenabled check for spelling errors and ensure that expected plugins are installed n at ensurevalidconfiguration usr share kibana src core server legacy config ensure valid configuration js code invalidconfig processexitcode cause undefined fatal error unknown configuration key s xpack apm servicemapenabled check for spelling errors and ensure that expected plugins are installed ,1
10114,4007417667.0,IssuesEvent,2016-05-12 18:04:27,Shopify/javascript,https://api.github.com/repos/Shopify/javascript,closed,Remove returns from anonymous `addEventListener` handlers,new-codemod,"Vanilla and jQuery handlers treat return values very differently. Removing (completely meaningless) explicit return values from `addEventListener` handlers should discourage conflating of techniques.
Example:
```
document.addEventListener('input', event => {
  if (event.target.type === 'hidden') {
    event.preventDefault();
    return event.stopPropagation();
  }
}, true);
```",1.0,"Remove returns from anonymous `addEventListener` handlers - Vanilla and jQuery handlers treat return values very differently. Removing (completely meaningless) explicit return values from `addEventListener` handlers should discourage conflating of techniques.
Example:
```
document.addEventListener('input', event => {
  if (event.target.type === 'hidden') {
    event.preventDefault();
    return event.stopPropagation();
  }
}, true);
```",0,remove returns from anonymous addeventlistener handlers vanilla and jquery handlers treat return values very differently removing completely meaningless explicit return values from addeventlistener handlers should discourage conflating of techniques example document addeventlistener input event if event target type hidden event preventdefault return event stoppropagation true ,0
4130,15589316578.0,IssuesEvent,2021-03-18 07:52:19,MicrosoftDocs/azure-docs,https://api.github.com/repos/MicrosoftDocs/azure-docs,closed,change create to created,Pri2 automation/svc cxp doc-enhancement dsc/subsvc triaged,"Once you have create a composite resource module
change to Once you have **created** a composite resource module
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 5d1c3be2-bb62-2e96-01eb-ee2d16c4a3c4
* Version Independent ID: 544726a4-a816-2eb3-f236-f083b618074d
* Content: [Convert configurations to composite resources for Azure Automation State Configuration](https://docs.microsoft.com/en-us/azure/automation/automation-dsc-create-composite)
* Content Source: [articles/automation/automation-dsc-create-composite.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/automation/automation-dsc-create-composite.md)
* Service: **automation**
* Sub-service: **dsc**
* GitHub Login: @mgreenegit
* Microsoft Alias: **migreene**",1.0,"change create to created - Once you have create a composite resource module
change to Once you have **created** a composite resource module
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 5d1c3be2-bb62-2e96-01eb-ee2d16c4a3c4
* Version Independent ID: 544726a4-a816-2eb3-f236-f083b618074d
* Content: [Convert configurations to composite resources for Azure Automation State Configuration](https://docs.microsoft.com/en-us/azure/automation/automation-dsc-create-composite)
* Content Source: [articles/automation/automation-dsc-create-composite.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/automation/automation-dsc-create-composite.md)
* Service: **automation**
* Sub-service: **dsc**
* GitHub Login: @mgreenegit
* Microsoft Alias: **migreene**",1,change create to created once you have create a composite resource module change to once you have created a composite resource module document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service automation sub service dsc github login mgreenegit microsoft alias migreene ,1
338237,30287838880.0,IssuesEvent,2023-07-08 22:50:15,MrMelbert/MapleStationCode,https://api.github.com/repos/MrMelbert/MapleStationCode,opened,Flaky test strange_reagent: list index out of bounds,🤖 Flaky Test Report,"
Flaky tests were detected in [this test run](https://github.com/MrMelbert/MapleStationCode/actions/runs/5496696407/attempts/1). This means that there was a failure that was cleared when the tests were simply restarted.
Failures:
```
strange_reagent: [22:35:09] Runtime in log_holder.dm,253: list index out of bounds
proc name: human readable timestamp (/datum/log_holder/proc/human_readable_timestamp)
usr: *no key*/(magic polar bear)
usr.loc: (Test Room (126,126,13))
src: /datum/log_holder (/datum/log_holder)
call stack:
/datum/log_holder (/datum/log_holder): human readable timestamp(3)
/datum/log_category/debug_mobt... (/datum/log_category/debug_mobtag): create entry(""TAG: mob_4273 CREATED: *no key..."", null, null)
/datum/log_holder (/datum/log_holder): Log(""debug-mobtag"", ""TAG: mob_4273 CREATED: *no key..."", null)
the magic polar bear (/mob/living/simple_animal/hostile/asteroid/polarbear/lesser): log mob tag(""TAG: mob_4273 CREATED: *no key..."", null)
the magic polar bear (/mob/living/simple_animal/hostile/asteroid/polarbear/lesser): Initialize(0)
the magic polar bear (/mob/living/simple_animal/hostile/asteroid/polarbear/lesser): Initialize(0)
the magic polar bear (/mob/living/simple_animal/hostile/asteroid/polarbear/lesser): Initialize(0)
the magic polar bear (/mob/living/simple_animal/hostile/asteroid/polarbear/lesser): Initialize(0)
the magic polar bear (/mob/living/simple_animal/hostile/asteroid/polarbear/lesser): Initialize(0)
Atoms (/datum/controller/subsystem/atoms): InitAtom(the magic polar bear (/mob/living/simple_animal/hostile/asteroid/polarbear/lesser), 0, /list (/list))
the magic polar bear (/mob/living/simple_animal/hostile/asteroid/polarbear/lesser): New(0)
the magic polar bear (/mob/living/simple_animal/hostile/asteroid/polarbear/lesser): New(the floor (126,126,13) (/turf/open/floor/iron))
/datum/unit_test/strange_reage... (/datum/unit_test/strange_reagent): allocate(/mob/living/simple_animal/host... (/mob/living/simple_animal/hostile/asteroid/polarbear/lesser))
/datum/unit_test/strange_reage... (/datum/unit_test/strange_reagent): allocate new target(/mob/living/simple_animal/host... (/mob/living/simple_animal/hostile/asteroid/polarbear/lesser))
/datum/unit_test/strange_reage... (/datum/unit_test/strange_reagent): Run()
RunUnitTest(/datum/unit_test/strange_reage... (/datum/unit_test/strange_reagent), /list (/list))
RunUnitTests()
/datum/callback (/datum/callback): InvokeAsync() at log_holder.dm:253
```
",1.0,"Flaky test strange_reagent: list index out of bounds -
Flaky tests were detected in [this test run](https://github.com/MrMelbert/MapleStationCode/actions/runs/5496696407/attempts/1). This means that there was a failure that was cleared when the tests were simply restarted.
Failures:
```
strange_reagent: [22:35:09] Runtime in log_holder.dm,253: list index out of bounds
proc name: human readable timestamp (/datum/log_holder/proc/human_readable_timestamp)
usr: *no key*/(magic polar bear)
usr.loc: (Test Room (126,126,13))
src: /datum/log_holder (/datum/log_holder)
call stack:
/datum/log_holder (/datum/log_holder): human readable timestamp(3)
/datum/log_category/debug_mobt... (/datum/log_category/debug_mobtag): create entry(""TAG: mob_4273 CREATED: *no key..."", null, null)
/datum/log_holder (/datum/log_holder): Log(""debug-mobtag"", ""TAG: mob_4273 CREATED: *no key..."", null)
the magic polar bear (/mob/living/simple_animal/hostile/asteroid/polarbear/lesser): log mob tag(""TAG: mob_4273 CREATED: *no key..."", null)
the magic polar bear (/mob/living/simple_animal/hostile/asteroid/polarbear/lesser): Initialize(0)
the magic polar bear (/mob/living/simple_animal/hostile/asteroid/polarbear/lesser): Initialize(0)
the magic polar bear (/mob/living/simple_animal/hostile/asteroid/polarbear/lesser): Initialize(0)
the magic polar bear (/mob/living/simple_animal/hostile/asteroid/polarbear/lesser): Initialize(0)
the magic polar bear (/mob/living/simple_animal/hostile/asteroid/polarbear/lesser): Initialize(0)
Atoms (/datum/controller/subsystem/atoms): InitAtom(the magic polar bear (/mob/living/simple_animal/hostile/asteroid/polarbear/lesser), 0, /list (/list))
the magic polar bear (/mob/living/simple_animal/hostile/asteroid/polarbear/lesser): New(0)
the magic polar bear (/mob/living/simple_animal/hostile/asteroid/polarbear/lesser): New(the floor (126,126,13) (/turf/open/floor/iron))
/datum/unit_test/strange_reage... (/datum/unit_test/strange_reagent): allocate(/mob/living/simple_animal/host... (/mob/living/simple_animal/hostile/asteroid/polarbear/lesser))
/datum/unit_test/strange_reage... (/datum/unit_test/strange_reagent): allocate new target(/mob/living/simple_animal/host... (/mob/living/simple_animal/hostile/asteroid/polarbear/lesser))
/datum/unit_test/strange_reage... (/datum/unit_test/strange_reagent): Run()
RunUnitTest(/datum/unit_test/strange_reage... (/datum/unit_test/strange_reagent), /list (/list))
RunUnitTests()
/datum/callback (/datum/callback): InvokeAsync() at log_holder.dm:253
```
",0,flaky test strange reagent list index out of bounds flaky tests were detected in this means that there was a failure that was cleared when the tests were simply restarted failures strange reagent runtime in log holder dm list index out of bounds proc name human readable timestamp datum log holder proc human readable timestamp usr no key magic polar bear usr loc test room src datum log holder datum log holder call stack datum log holder datum log holder human readable timestamp datum log category debug mobt datum log category debug mobtag create entry tag mob created no key null null datum log holder datum log holder log debug mobtag tag mob created no key null the magic polar bear mob living simple animal hostile asteroid polarbear lesser log mob tag tag mob created no key null the magic polar bear mob living simple animal hostile asteroid polarbear lesser initialize the magic polar bear mob living simple animal hostile asteroid polarbear lesser initialize the magic polar bear mob living simple animal hostile asteroid polarbear lesser initialize the magic polar bear mob living simple animal hostile asteroid polarbear lesser initialize the magic polar bear mob living simple animal hostile asteroid polarbear lesser initialize atoms datum controller subsystem atoms initatom the magic polar bear mob living simple animal hostile asteroid polarbear lesser list list the magic polar bear mob living simple animal hostile asteroid polarbear lesser new the magic polar bear mob living simple animal hostile asteroid polarbear lesser new the floor turf open floor iron datum unit test strange reage datum unit test strange reagent allocate mob living simple animal host mob living simple animal hostile asteroid polarbear lesser datum unit test strange reage datum unit test strange reagent allocate new target mob living simple animal host mob living simple animal hostile asteroid polarbear lesser datum unit test strange reage datum unit test strange reagent run rununittest datum unit test strange reage datum unit test strange reagent list list rununittests datum callback datum callback invokeasync at log holder dm ,0
260583,19676666561.0,IssuesEvent,2022-01-11 13:04:49,systemd/systemd,https://api.github.com/repos/systemd/systemd,closed,[networkd] description of UseDNS= / UseDomains=,network documentation needs-reporter-feedback ❓,"Systemd version: 238.133
Distro: Archlinux
From manual page: `When true (the default), the DNS servers received from the DHCP server will be used and take precedence over any statically configured ones.`
What I observed is that both sections simply get merged, without any particular precedence (formally, the DNS= listed servers come before the dynamically obtained ones). The same applies to Domains= and UseDomains= - static and dynamic entries are simply merged.
_Probably_ the same applies to the NTP= / UseNTP= pair - but I haven't checked this yet.
The behavior is otherwise as usual - the relevant DNS servers are tried in turn (in case some of the servers fail; tested with a simple iptables rule)
Distro: Archlinux
From manual page: `When true (the default), the DNS servers received from the DHCP server will be used and take precedence over any statically configured ones.`
What I observed is that both sections simply get merged, without any particular precedence (formally, the DNS= listed servers come before the dynamically obtained ones). The same applies to Domains= and UseDomains= - static and dynamic entries are simply merged.
_Probably_ the same applies to the NTP= / UseNTP= pair - but I haven't checked this yet.
The behavior is as usual - the relevant DNSes are tried in turn (in case some of servers fail, tested with simple iptables rule)",0, description of usedns usedomains systemd version distro archlinux from manual page when true the default the dns servers received from the dhcp server will be used and take precedence over any statically configured ones what i observed is that both sections simply get merged without any particular preferences formally dns listed dnses are before dynamically obtained ones same applies to domains and usedomains static and dynamic are simply merged probably the same applies to ntp usentp pair but i haven t checked this yet the behavior is as usual the relevant dnses are tried in turn in case some of servers fail tested with simple iptables rule ,0
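The merge behaviour reported above is easy to reproduce with a minimal networkd unit; the sketch below is illustrative only (interface name and addresses are placeholders) and assumes the v238-era layout where the DHCP client options sit in a [DHCP] section. Both the static DNS= entry and the DHCP-provided servers end up in the per-link DNS list.
```
# /etc/systemd/network/20-wired.network  (illustrative sketch)
[Match]
Name=eth0

[Network]
DHCP=ipv4
DNS=192.0.2.53            # static server; listed first, but merged rather than preferred
Domains=example.internal  # static search domain

[DHCP]
UseDNS=true               # DHCP-provided servers are merged with DNS=, not substituted
UseDomains=true           # same for search domains
```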
2755,12541172020.0,IssuesEvent,2020-06-05 11:49:36,input-output-hk/cardano-node,https://api.github.com/repos/input-output-hk/cardano-node,opened,[QA] - Min tx fees,e2e automation,"- check the min tx fees for different transaction types: 1-to-1, 1-to-10, 10-to-1, 10-to-10, 100-to-100 + different types of certificates
- the scope of this test is to check that the tx fee remains constant between builds",1.0,"[QA] - Min tx fees - - check the min tx fees for different transaction types: 1-to-1, 1-to-10, 10-to-1, 10-to-10, 100-to-100 + different types of certificates
- the scope of this test is to check that the tx fee remains constant between builds",1, min tx fees check the min tx fees for different transaction types to to to to to different types of certificates the scope of this test is to check that the tx fee remains constant between builds,1
221748,7395831014.0,IssuesEvent,2018-03-18 03:33:42,langbakk/HSS,https://api.github.com/repos/langbakk/HSS,closed,"Bug: leaving the bathroom, leaving a pair of panties, they might not be there on return",bug priority 2,"For some reason, it seems like panties aren't stored when leaving and reentering the bathroom",1.0,"Bug: leaving the bathroom, leaving a pair of panties, they might not be there on return - For some reason, it seems like panties aren't stored when leaving and reentering the bathroom",0,bug leaving the bathroom leaving a pair of panties they might not be there on return for some reason it seems like panties aren t stored when leaving and reentering the bathroom,0
766148,26873336335.0,IssuesEvent,2023-02-04 18:45:00,belav/csharpier,https://api.github.com/repos/belav/csharpier,closed,csharpier-repos has files that encoding detection fails on,type:bug priority:low,"The code below returns null for encoding on a few files. It does this even after they are written out with UTF8
```c#
var detectionResult = CharsetDetector.DetectFromFile(file);
var encoding = detectionResult?.Detected?.Encoding;
```
The files are from the aspnetcore repository -
/aspnetcore/src/Shared/test/Shared.Tests/UrlDecoderTests.cs
/aspnetcore/src/Razor/Microsoft.AspNetCore.Razor.Language/src/BoundAttributeDescriptorComparer.cs
/aspnetcore/src/Razor/Microsoft.AspNetCore.Razor.Language/src/TagHelperDescriptorComparer.cs
/aspnetcore/src/Servers/Kestrel/shared/KnownHeaders.cs",1.0,"csharpier-repos has files that encoding detection fails on - The code below returns null for encoding on a few files. It does this even after they are written out with UTF8
```c#
var detectionResult = CharsetDetector.DetectFromFile(file);
var encoding = detectionResult?.Detected?.Encoding;
```
The files are from the aspnetcore repository -
/aspnetcore/src/Shared/test/Shared.Tests/UrlDecoderTests.cs
/aspnetcore/src/Razor/Microsoft.AspNetCore.Razor.Language/src/BoundAttributeDescriptorComparer.cs
/aspnetcore/src/Razor/Microsoft.AspNetCore.Razor.Language/src/TagHelperDescriptorComparer.cs
/aspnetcore/src/Servers/Kestrel/shared/KnownHeaders.cs",0,csharpier repos has files that encoding detection fails on the code below returns null for encoding on a few files it does this even after they are written out with c var detectionresult charsetdetector detectfromfile file var encoding detectionresult detected encoding the files are from the aspnetcore repository aspnetcore src shared test shared tests urldecodertests cs aspnetcore src razor microsoft aspnetcore razor language src boundattributedescriptorcomparer cs aspnetcore src razor microsoft aspnetcore razor language src taghelperdescriptorcomparer cs aspnetcore src servers kestrel shared knownheaders cs,0
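The failure mode above (a detector returning no encoding even for files freshly written as UTF-8) is usually handled with an explicit fallback. As a rough illustration of that pattern, here is a sketch in Python using chardet rather than the C# library from the report:
```python
import chardet

def read_text(path: str, fallback: str = "utf-8") -> str:
    """Read a file, falling back to UTF-8 when charset detection gives up."""
    raw = open(path, "rb").read()
    guess = chardet.detect(raw)               # may return {'encoding': None, ...}
    encoding = guess.get("encoding") or fallback
    return raw.decode(encoding, errors="replace")
```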
11092,13112317490.0,IssuesEvent,2020-08-05 01:49:46,eirannejad/pyRevit,https://api.github.com/repos/eirannejad/pyRevit,closed,pyrevit installation error,Installer Misc Compatibility,"Hi, I downloaded the 'pyRevit_4.7.4_signed' from GitHub and failed to install it. Please see attached pics and log. Please let me know why that is.





[Microsoft_.NET_Core_Runtime_-_2.0.7_(x64)_20200530195844.log](https://github.com/eirannejad/pyRevit/files/4705595/Microsoft_.NET_Core_Runtime_-_2.0.7_.x64._20200530195844.log)
",True,"pyrevit installation error - Hi, I downloaded the 'pyRevit_4.7.4_signed' from GitHub and failed to intall it. Please see attached pics and log. Please let me know why is that.





[Microsoft_.NET_Core_Runtime_-_2.0.7_(x64)_20200530195844.log](https://github.com/eirannejad/pyRevit/files/4705595/Microsoft_.NET_Core_Runtime_-_2.0.7_.x64._20200530195844.log)
",0,pyrevit installation error hi i downloaded the pyrevit signed from github and failed to intall it please see attached pics and log please let me know why is that ,0
645418,21004358219.0,IssuesEvent,2022-03-29 20:48:25,status-im/status-desktop,https://api.github.com/repos/status-im/status-desktop,closed,Public chat button should lose its highlight state when its context menu was closed,bug ui Chat priority 4: minor,"This is what the button looks like after I've closed the menu (without doing anything):

It stays active, even though the menu is closed. It should become inactive as well.
",1.0,"Public chat button should lose its highlight state when its context menu was closed - This is what the button looks like after I've closed the menu (without doing anything):

It stays active, even though the menu is closed. It should become inactive as well.
",0,public chat button should lose its highlight state when its context menu was closed this is what the button looks like after i ve closed the menu without doing anything it stays active even though the menu is closed it should become inactive as well ,0
3690,14353545766.0,IssuesEvent,2020-11-30 07:06:20,dfernandezm/moneycol,https://api.github.com/repos/dfernandezm/moneycol,opened,CloudRun with VPC GKE connector,automation backend myiac,"- Leave in GKE only ElasticSearch and stateful data (batch, etc.)
- Use CloudRun for compute (server, FE, collections)",1.0,"CloudRun with VPC GKE connector - - Leave in GKE only ElasticSearch and stateful data (batch, etc.)
- Use CloudRun for compute (server, FE, collections)",1,cloudrun with vpc gke connector leave in gke only elasticsearch and stateful data batch etc use cloudrun for compute server fe collections,1
7433,24871455688.0,IssuesEvent,2022-10-27 15:30:28,Azure/azure-cli,https://api.github.com/repos/Azure/azure-cli,closed,Set runbook to draft state,question Automation customer-reported needs-author-feedback no-recent-activity CXP Attention Auto-Assign,"Is there no way to set an existing runbook to draft state? I can't work out how to replace content with ""az automation runbook replace-content"" without manually setting the runbook to draft in the Azure Portal
---
#### Document Details
⚠ *Do not edit this section. It is required for learn.microsoft.com ➟ GitHub issue linking.*
* ID: cbb98a7a-2739-73ec-f20b-03367f919215
* Version Independent ID: 3125cea7-248b-9ba0-f304-248b2c766edc
* Content: [az automation runbook](https://learn.microsoft.com/en-us/cli/azure/automation/runbook?view=azure-cli-latest)
* Content Source: [latest/docs-ref-autogen/automation/runbook.yml](https://github.com/MicrosoftDocs/azure-docs-cli/blob/main/latest/docs-ref-autogen/automation/runbook.yml)
* GitHub Login: @rloutlaw
* Microsoft Alias: **routlaw**",1.0,"Set runbook to draft state - Is there no way to set an existing runbook to draft state? I can't work out how to replace content with ""az automation runbook replace-content"" without manually setting the runbook to draft in the Azure Portal
---
#### Document Details
⚠ *Do not edit this section. It is required for learn.microsoft.com ➟ GitHub issue linking.*
* ID: cbb98a7a-2739-73ec-f20b-03367f919215
* Version Independent ID: 3125cea7-248b-9ba0-f304-248b2c766edc
* Content: [az automation runbook](https://learn.microsoft.com/en-us/cli/azure/automation/runbook?view=azure-cli-latest)
* Content Source: [latest/docs-ref-autogen/automation/runbook.yml](https://github.com/MicrosoftDocs/azure-docs-cli/blob/main/latest/docs-ref-autogen/automation/runbook.yml)
* GitHub Login: @rloutlaw
* Microsoft Alias: **routlaw**",1,set runbook to draft state is there no way to set an existing runbook to draft state i cant work how to replace content az automation runbook replace content without manually setting the runbook to draft in the azure portal document details ⚠ do not edit this section it is required for learn microsoft com ➟ github issue linking id version independent id content content source github login rloutlaw microsoft alias routlaw ,1
275130,23893552269.0,IssuesEvent,2022-09-08 13:18:13,ARUP-CAS/aiscr-webamcr,https://api.github.com/repos/ARUP-CAS/aiscr-webamcr,closed,Multi-level controlled vocabularies - revision,bug / maintanance TESTED,"The correct behaviour of multi-level controlled vocabularies is not implemented (separators in lists - it works well e.g. for the event type, but not in many other places)",1.0,"Multi-level controlled vocabularies - revision - The correct behaviour of multi-level controlled vocabularies is not implemented (separators in lists - it works well e.g. for the event type, but not in many other places)",0,multi-level controlled vocabularies revision the correct behaviour of multi-level controlled vocabularies is not implemented separators in lists it works well e g for the event type but not in many other places ,0
4275,15930745478.0,IssuesEvent,2021-04-14 01:39:13,longhorn/longhorn,https://api.github.com/repos/longhorn/longhorn,opened,[e2e] add integration test for Volumes used by MinIO workloads cannot be attached with state Degraded,require/automation-e2e,Ref: https://github.com/longhorn/longhorn/issues/2073,1.0,[e2e] add integration test for Volumes used by MinIO workloads cannot be attached with state Degraded - Ref: https://github.com/longhorn/longhorn/issues/2073,1, add integration test for volumes used by minio workloads cannot be attached with state degraded ref ,1
366230,25573759280.0,IssuesEvent,2022-11-30 20:03:30,lieion/OpenSource_Final_Project,https://api.github.com/repos/lieion/OpenSource_Final_Project,closed,Current progress status,documentation,"
Template name
김건우 - progress status
v 0.1.0
Suggestions for new features, pages, and files to add
Template Content
About
**Current status overview**
1. Sign-up system implemented (express.js) (done)
2. Login system implemented (done)
3. Order history in progress (planned for 11/26)
4. Add myPage (planned for 11/26)
5. Fix related to navbar issue #6 (in progress)
Due to time constraints, the database will probably be replaced by an array in the local server.js (like the chat storage approach from lecture 13)
Sign-up requests use a form to send the user's input to the server, which then stores it in the array.
**Additional Info**
`npm install body-parser`
`npm install express -save`
are required
Note that the token received after login may cause conflicts
",1.0,"Current progress status -
Template name
김건우 - progress status
v 0.1.0
Suggestions for new features, pages, and files to add
Template Content
About
**Current status overview**
1. Sign-up system implemented (express.js) (done)
2. Login system implemented (done)
3. Order history in progress (planned for 11/26)
4. Add myPage (planned for 11/26)
5. Fix related to navbar issue #6 (in progress)
Due to time constraints, the database will probably be replaced by an array in the local server.js (like the chat storage approach from lecture 13)
Sign-up requests use a form to send the user's input to the server, which then stores it in the array.
**Additional Info**
`npm install body-parser`
`npm install express -save`
are required
Note that the token received after login may cause conflicts
",0,current progress status template name 김건우 progress status v suggestions for new features pages and files to add template content about current status overview sign-up system implemented express js done login system implemented done order history in progress planned for add mypage planned for fix related to navbar issue in progress due to time constraints the database will probably be replaced by an array in the local server js like the chat storage approach from lecture sign-up requests use a form to send the user s input to the server which then stores it in the array additional info npm install body parser npm install express save are required note that the token received after login may cause conflicts ,0
761329,26676623435.0,IssuesEvent,2023-01-26 14:43:39,eclipse-sirius/sirius-components,https://api.github.com/repos/eclipse-sirius/sirius-components,opened,Upload document does not work properly if a special resource factory is needed,type: bug difficulty: starter priority: low package: core,"* [X] **I have checked that this bug has not yet been reported by someone else**
* [X] **I have checked that this bug appears on Chrome**
* [X] **I have specified the version** : latest
* [X] **I have specified my environment** : All
### Actual behavior
For example, a UML resource needs a special resource factory to instantiate it and resolve the proxies. If the factory needed to resolve the pathmap protocol is not present, then it fails.
### Steps to reproduce
no reproducible scenario in Sirius-Web yet.
### Expected behavior
The registered special resource factory should be present on the resourceSet used to instantiate the uploaded resource
",1.0,"Upload document does not work properly if a special resource factory is needed - * [X] **I have checked that this bug has not yet been reported by someone else**
* [X] **I have checked that this bug appears on Chrome**
* [X] **I have specified the version** : latest
* [X] **I have specified my environment** : All
### Actual behavior
For example, a UML resource needs a special resource factory to instantiate it and resolve the proxies. If the factory needed to resolve the pathmap protocol is not present, then it fails.
### Steps to reproduce
no reproducible scenario in Sirius-Web yet.
### Expected behavior
The registered special resource factory should be present on the resourceSet used to instantiate the uploaded resource
",0,upload document does not work properly if a special resource factory is needed i have checked that this bug has not yet been reported by someone else i have checked that this bug appears on chrome i have specified the version latest i have specified my environment all actual behavior for example uml resource need special resource factory to instanciate and resolve the proxies if the factory needed to resolved the pathmap protocol is not present then it fails steps to reproduce no reproducible scenario in sirius web yet expected behavior the registered special resrouce factory should be present on the resourceset used to instanciate the uploaded resourceset ,0
238892,7784187614.0,IssuesEvent,2018-06-06 12:32:35,umple/umple,https://api.github.com/repos/umple/umple,closed,Document unspecified in the user manual,Component-UserDocs Diffic-Easy Priority-Medium,"In the state machine section of the user manual, describe the use of the 'unspecified' event to handle situations where a message is received that is not understood. Note that this can be used in regular and queued state machines. Come up with a realistic example",1.0,"Document unspecified in the user manual - In the state machine section of the user manual, describe the use of the 'unspecified' event to handle situations where a message is received that is not understood. Note that this can be used in regular and queued state machines. Come up with a realistic example",0,document unspecified in the user manual in the state machine section of the user manual describe the use of the unspecified event to handle situations where a message is received that is not understood note that this can be used in regular and queued state machines come up with a realistic example,0
7233,24490276542.0,IssuesEvent,2022-10-10 00:12:18,astropy/astropy,https://api.github.com/repos/astropy/astropy,closed,Have good commit messages with commitizen,Feature Request needs-discussion dev-automation,"### Description
https://commitizen-tools.github.io/commitizen/ + pre-commit to enforce good commit messages.
### Additional context
""Commitizen is a tool designed for teams.
Its main purpose is to define a standard way of committing rules and communicating it (using the cli provided by commitizen).
The reasoning behind it is that it is easier to read, and enforces writing descriptive commits.
Besides that, having a convention on your commits makes it possible to parse them and use them for something else, like generating automatically the version or a changelog.""
(https://commitizen-tools.github.io/commitizen/)",1.0,"Have good commit messages with commitizen - ### Description
https://commitizen-tools.github.io/commitizen/ + pre-commit to enforce good commit messages.
### Additional context
""Commitizen is a tool designed for teams.
Its main purpose is to define a standard way of committing rules and communicating it (using the cli provided by commitizen).
The reasoning behind it is that it is easier to read, and enforces writing descriptive commits.
Besides that, having a convention on your commits makes it possible to parse them and use them for something else, like generating automatically the version or a changelog.""
(https://commitizen-tools.github.io/commitizen/)",1,have good commit messages with commitizen description pre commit to enforce good commit messages additional context commitizen is a tool designed for teams its main purpose is to define a standard way of committing rules and communicating it using the cli provided by commitizen the reasoning behind it is that it is easier to read and enforces writing descriptive commits besides that having a convention on your commits makes it possible to parse them and use them for something else like generating automatically the version or a changelog ,1
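For reference, hooking commitizen into pre-commit is a small piece of configuration; a minimal sketch of a `.pre-commit-config.yaml` follows (the `rev` pin is a placeholder, not a version recommended by the issue):
```yaml
# .pre-commit-config.yaml (sketch)
repos:
  - repo: https://github.com/commitizen-tools/commitizen
    rev: v2.20.0            # placeholder; pin to the current release
    hooks:
      - id: commitizen      # rejects commit messages that do not follow the convention
        stages: [commit-msg]
```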
6725,7745110548.0,IssuesEvent,2018-05-29 17:16:09,aws/aws-sdk-ruby,https://api.github.com/repos/aws/aws-sdk-ruby,closed,run_instances with tag_specifications field with resource_type: spot-instances-request not supported!,closing-soon-if-no-response service api,"
Hi,
Calling Aws::EC2::Client run_instances method with tag_specification resource type ""spot-instances-request""
stack the following:
```
[type:error] [error_type:Aws::EC2::Errors::InvalidParameterValue][message:""'spot-instances-request' is not a valid taggable resource type for this operation.""]
/app/vendor/bundle/ruby/2.3.0/gems/aws-sdk-core-2.10.125/lib/seahorse/client/plugins/raise_response_errors.rb:15:in `call'
/app/vendor/bundle/ruby/2.3.0/gems/aws-sdk-core-2.10.125/lib/aws-sdk-core/plugins/jsonvalue_converter.rb:20:in `call'
/app/vendor/bundle/ruby/2.3.0/gems/aws-sdk-core-2.10.125/lib/aws-sdk-core/plugins/idempotency_token.rb:18:in `call'
/app/vendor/bundle/ruby/2.3.0/gems/aws-sdk-core-2.10.125/lib/aws-sdk-core/plugins/param_converter.rb:20:in `call'
/app/vendor/bundle/ruby/2.3.0/gems/aws-sdk-core-2.10.125/lib/seahorse/client/plugins/response_target.rb:21:in `call'
/app/vendor/bundle/ruby/2.3.0/gems/aws-sdk-core-2.10.125/lib/seahorse/client/request.rb:70:in `send_request'
/app/vendor/bundle/ruby/2.3.0/gems/aws-sdk-core-2.10.125/lib/seahorse/client/base.rb:207:in `block (2 levels) in define_operation_methods'
```
but in the [documentation](https://docs.aws.amazon.com/sdkforruby/api/Aws/EC2/Client.html) the supported types are: ""accepts customer-gateway, dhcp-options, image, instance, internet-gateway, network-acl, network-interface, reserved-instances, route-table, snapshot, **spot-instances-request**, subnet, security-group, volume, vpc, vpn-connection, vpn-gateway"".
aws-sdk-core (= 2.11.4)
ruby:2.3
ubuntu 16.04
example call:
```
client.run_instances(
  ...
  tag_specifications: [{
    resource_type: 'spot-instances-request',
    tags: {
      name: name
    }
  }],
  instance_market_options: {
    market_type: ""spot"",
    spot_options: {
      spot_instance_type: ""one-time"",
      instance_interruption_behavior: ""terminate"",
    },
  }
)
```",1.0,"run_instances with tag_specifications field with resource_type: spot-instances-request not supported! -
Hi,
Calling the Aws::EC2::Client run_instances method with tag_specification resource type ""spot-instances-request""
produces the following stack trace:
```
[type:error] [error_type:Aws::EC2::Errors::InvalidParameterValue][message:""'spot-instances-request' is not a valid taggable resource type for this operation.""]
/app/vendor/bundle/ruby/2.3.0/gems/aws-sdk-core-2.10.125/lib/seahorse/client/plugins/raise_response_errors.rb:15:in `call'
/app/vendor/bundle/ruby/2.3.0/gems/aws-sdk-core-2.10.125/lib/aws-sdk-core/plugins/jsonvalue_converter.rb:20:in `call'
/app/vendor/bundle/ruby/2.3.0/gems/aws-sdk-core-2.10.125/lib/aws-sdk-core/plugins/idempotency_token.rb:18:in `call'
/app/vendor/bundle/ruby/2.3.0/gems/aws-sdk-core-2.10.125/lib/aws-sdk-core/plugins/param_converter.rb:20:in `call'
/app/vendor/bundle/ruby/2.3.0/gems/aws-sdk-core-2.10.125/lib/seahorse/client/plugins/response_target.rb:21:in `call'
/app/vendor/bundle/ruby/2.3.0/gems/aws-sdk-core-2.10.125/lib/seahorse/client/request.rb:70:in `send_request'
/app/vendor/bundle/ruby/2.3.0/gems/aws-sdk-core-2.10.125/lib/seahorse/client/base.rb:207:in `block (2 levels) in define_operation_methods'
```
but in the [documentation](https://docs.aws.amazon.com/sdkforruby/api/Aws/EC2/Client.html) the supported types are: ""accepts customer-gateway, dhcp-options, image, instance, internet-gateway, network-acl, network-interface, reserved-instances, route-table, snapshot, **spot-instances-request**, subnet, security-group, volume, vpc, vpn-connection, vpn-gateway"".
aws-sdk-core (= 2.11.4)
ruby:2.3
ubuntu 16.04
example call:
```
client.run_instances(
  ...
  tag_specifications: [{
    resource_type: 'spot-instances-request',
    tags: {
      name: name
    }
  }],
  instance_market_options: {
    market_type: ""spot"",
    spot_options: {
      spot_instance_type: ""one-time"",
      instance_interruption_behavior: ""terminate"",
    },
  }
)
```",0,run instances with tag specifications field with resource type spot instances request not supported hi calling aws client run instances method with tag specification resource type spot instances request stack the following app vendor bundle ruby gems aws sdk core lib seahorse client plugins raise response errors rb in call app vendor bundle ruby gems aws sdk core lib aws sdk core plugins jsonvalue converter rb in call app vendor bundle ruby gems aws sdk core lib aws sdk core plugins idempotency token rb in call app vendor bundle ruby gems aws sdk core lib aws sdk core plugins param converter rb in call app vendor bundle ruby gems aws sdk core lib seahorse client plugins response target rb in call app vendor bundle ruby gems aws sdk core lib seahorse client request rb in send request app vendor bundle ruby gems aws sdk core lib seahorse client base rb in block levels in define operation methods but in the the support type are accepts customer gateway dhcp options image instance internet gateway network acl network interface reserved instances route table snapshot spot instances request subnet security group volume vpc vpn connection vpn gateway aws sdk core ruby ubuntu example call client run instances tag specifications resource type spot instances request tags name name instance market options market type spot spot options spot instance type one time instance interruption behavior terminate ,0
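The report above is against the Ruby SDK. As a rough sketch of one workaround, tagging the spot request after launch instead of in tag_specifications, here is a Python/boto3 illustration (AMI id, instance type, and tag values are placeholders; this is not the fix for the Ruby issue itself):
```python
import boto3

ec2 = boto3.client("ec2")

# Launch without the unsupported spot-instances-request tag specification...
resp = ec2.run_instances(
    ImageId="ami-00000000",      # placeholder AMI id
    InstanceType="t3.micro",     # placeholder instance type
    MinCount=1,
    MaxCount=1,
    InstanceMarketOptions={
        "MarketType": "spot",
        "SpotOptions": {
            "SpotInstanceType": "one-time",
            "InstanceInterruptionBehavior": "terminate",
        },
    },
)

# ...then tag the spot request once its id is known from the launch response.
spot_request_id = resp["Instances"][0]["SpotInstanceRequestId"]
ec2.create_tags(
    Resources=[spot_request_id],
    Tags=[{"Key": "Name", "Value": "my-spot-request"}],  # placeholder tag value
)
```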
6610,23515552187.0,IssuesEvent,2022-08-18 20:56:28,o3de/o3de,https://api.github.com/repos/o3de/o3de,opened,PhysX Fixed Joint Component returns a memory access violation when getting its Component Property Tree with types,kind/bug needs-triage kind/automation sig/simulation,"**Describe the bug**
When attempting to get the **Component Property Tree** from a **PhysX Fixed Joint Component** a memory access violation is returned
**Steps to reproduce**
Steps to reproduce the behavior:
1. Create a Python Editor Test that makes a call to get the **Component Property Tree** from the **PhysX Fixed Joint Component**.
```
test_entity = EditorEntity.create_editor_entity(""Test"")
test_component = test_entity.add_component(""PhysX Fixed Joint"")
print(test_component.get_property_type_visibility())
```
or
```
test_entity = hydra.Entity(""test"")
entity.create_entity(position, [""PhysX Fixed Joint""])
component = test_entity.components[0]
print(hydra.get_property_tree(component)
```
2. Run automation
**Expected behavior**
A property tree with paths is returned and printed to the stream
**Actual behavior**
A Read Access Memory exception is returned
**Callstack**
```
```
",1.0,"PhysX Fixed Joint Component returns a memory access violation when getting its Component Property Tree with types - **Describe the bug**
When attempting to get the **Component Property Tree** from a **PhysX Fixed Joint Component** a memory access violation is returned
**Steps to reproduce**
Steps to reproduce the behavior:
1. Create a Python Editor Test that makes a call to get the **Component Property Tree** from the **PhysX Fixed Joint Component**.
```
test_entity = EditorEntity.create_editor_entity(""Test"")
test_component = test_entity.add_component(""PhysX Fixed Joint"")
print(test_component.get_property_type_visibility())
```
or
```
test_entity = hydra.Entity(""test"")
entity.create_entity(position, [""PhysX Fixed Joint""])
component = test_entity.components[0]
print(hydra.get_property_tree(component)
```
2. Run automation
**Expected behavior**
A property tree with paths is returned and printed to the stream
**Actual behavior**
A Read Access Memory exception is returned
**Callstack**
```
```
",1,physx fixed joint component returns a memory access violation when getting its component property tree with types describe the bug when attempting to get the component property tree from a physx fixed joint component a memory access violation is returned steps to reproduce steps to reproduce the behavior create a python editor test that makes a call to get the component property tree from the physx fixed joint component test entity editorentity create editor entity test test component test entity add component physx fixed joint print test component get property type visibility or test entity hydra entity test entity create entity position component test entity components print hydra get property tree component run automation expected behavior a property tree with paths is returned and printed to the stream actual behavior a read access memory exception is returned callstack ,1
29267,11738897192.0,IssuesEvent,2020-03-11 16:48:00,QubesOS/qubes-issues,https://api.github.com/repos/QubesOS/qubes-issues,reopened,Mount /rw and /home with nosuid + nodev,C: templates P: default T: enhancement security,"**The problem you're addressing (if any)**
When a template has been configured to enforce internal user permissions, malware that gains a temporarily useful privilege escalation may continue as root user indefinitely in AppVMs by setting up executables in /home that have +s SUID bit set. The effect is that an OS patch for the initial vulnerability will not de-privilege malware that exploited it.
Similarly, the ability to create device node files in /home can permit privilege escalation, and such nodes normally don't belong in /home.
**Describe the solution you'd like**
Change the /rw and /home entries in /etc/fstab to use the `nosuid` and `nodev` options. This works even with bind mounts.
**Where is the value to a user, and who might that user be?**
Users who do not want malware to persist indefinitely or easily gain root privileges may remove the 'qubes-core-agent-passwordless-root' package, or reconfigure templates according to the 'vm-sudo' doc or Qubes-VM-hardening.
Mounting /rw and /home with `nosuid` + `nodev` bolsters security in such template configurations by giving OS security patches a chance to de-privilege malware.
**Relevant [documentation](https://www.qubes-os.org/doc/) you've consulted**
https://www.qubes-os.org/doc/vm-sudo/
https://github.com/tasket/Qubes-VM-hardening
",True,"Mount /rw and /home with nosuid + nodev - **The problem you're addressing (if any)**
When a template has been configured to enforce internal user permissions, malware that gains a temporarily useful privilege escalation may continue as root user indefinitely in AppVMs by setting up executables in /home that have +s SUID bit set. The effect is that an OS patch for the initial vulnerability will not de-privilege malware that exploited it.
Similarly, the ability to create device node files in /home can permit privilege escalation, and such nodes normally don't belong in /home.
**Describe the solution you'd like**
Change the /rw and /home entries in /etc/fstab to use the `nosuid` and `nodev` options. This works even with bind mounts.
**Where is the value to a user, and who might that user be?**
Users who do not want malware to persist indefinitely or easily gain root privileges may remove the 'qubes-core-agent-passwordless-root' package, or reconfigure templates according to the 'vm-sudo' doc or Qubes-VM-hardening.
Mounting /rw and /home with `nosuid` + `nodev` bolsters security in such template configurations by giving OS security patches a chance to de-privilege malware.
**Relevant [documentation](https://www.qubes-os.org/doc/) you've consulted**
https://www.qubes-os.org/doc/vm-sudo/
https://github.com/tasket/Qubes-VM-hardening
",0,mount rw and home with nosuid nodev the problem you re addressing if any when a template has been configured to enforce internal user permissions malware that gains a temporarily useful privilege escalation may continue as root user indefinitely in appvms by setting up executables in home that have s suid bit set the effect is that an os patch for the initial vulnerability will not de privilege malware that exploited it similarly the ability to create device node files in home can permit privilege escalation and such nodes normally don t belong in home describe the solution you d like change the rw and home entries in etc fstab to use the nosuid and nodev options this works even with bind mounts where is the value to a user and who might that user be users who do not want malware to persist indefinitely or easily gain root privileges may remove the qubes core agent passwordless root package or reconfigure templates according to the vm sudo doc or qubes vm hardening mounting rw and home with nosuid nodev bolsters security in such template configurations by giving os security patches a chance to de privilege malware relevant you ve consulted ,0
10065,7078137500.0,IssuesEvent,2018-01-10 01:48:41,tensorflow/tensorflow,https://api.github.com/repos/tensorflow/tensorflow,closed,Cannot show stderr when using Jupyter,type:bug/performance,"Hello,
Could you please have a look about this.
I am using TF and Jupyter. But what confuses me is that the log text is not shown in the Jupyter output cell (though it is output correctly in ipython).
I think it is because of stderr. This issue has been discussed before in #3047. You added several lines to determine whether or not the current context is an interactive environment.
However, even when I use Jupyter, the return value of ""sys.flags.interactive"" is still zero, so the logger level can never be set to ""info"" and use ""stdout"" instead of ""stderr"".
Thanks a lot!",True,"Cannot show stderr when using Jupyter - Hello,
Could you please have a look about this.
I am using TF and Jupyter. But what confuses me is that the log text is not shown in the Jupyter output cell (though it is output correctly in ipython).
I think it is because of stderr. This issue has been discussed before in #3047. You added several lines to determine whether or not the current context is an interactive environment.
However, even when I use Jupyter, the return value of ""sys.flags.interactive"" is still zero, so the logger level can never be set to ""info"" and use ""stdout"" instead of ""stderr"".
Thanks a lot!",0,cannot show stderr when using jupyter hello could you please have a look about this i am using tf and jupyter but what makes me confuse is that the log text cannot be shown in jupyter output cell but it output correctly in ipython i think it is because of the stderr this issue have been discussed before in you add several lines to determine whether or not current context is in an interactive environment however even if i use jupyter the return value of sys flags interactive is still zero and the logger lever can never be setted to info and use stdout instead of stderr thanks a lot ,0
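A common workaround for the behaviour described above, assuming a TF 1.x-era install where tf.logging is backed by the standard logging module's ""tensorflow"" logger, is to point that logger at stdout explicitly rather than relying on the interactive-session detection; a minimal sketch:
```python
import logging
import sys

# Route TensorFlow's Python-side log records to stdout so notebook frontends
# render them in the output cell regardless of how stderr is displayed.
tf_logger = logging.getLogger("tensorflow")
tf_logger.setLevel(logging.INFO)

handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(logging.Formatter("%(levelname)s:%(name)s:%(message)s"))

tf_logger.handlers = [handler]   # replace the default stderr handler
tf_logger.propagate = False      # avoid duplicate records via the root logger
```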
45744,2939041074.0,IssuesEvent,2015-07-01 14:24:36,HPI-SWA-Teaching/SWT15-Project-13,https://api.github.com/repos/HPI-SWA-Teaching/SWT15-Project-13,opened,Output of return values,priority: normal type: bug,"Return values should be output via ```printOn:```, not via ```asString```.",1.0,"Output of return values - Return values should be output via ```printOn:```, not via ```asString```.",0,output of return values return values should be output via printon not via asstring ,0
4073,15356139065.0,IssuesEvent,2021-03-01 12:01:47,mozilla-mobile/firefox-ios,https://api.github.com/repos/mozilla-mobile/firefox-ios,opened,[XCUITest] Select latest stack Xcode 12.4 and iOS version 14.4 to run the tests in all schemes,eng:automation,"For RunAllXCUITests we are still using iOS 14.0 and due to recent changes with WKWebView the app is crashing and around ~100 tests are failing.
",1.0,"[XCUITest] Select latest stack Xcode 12.4 and iOS version 14.4 to run the tests in all schemes - For RunAllXCUITests we are still using iOS 14.0 and due to recent changes with WKWebView app is crashing and around ~100 tests failing.
",1, select latest stack xcode and ios version to run the tests in all schemes for runallxcuitests we are still using ios and due to recent changes with wkwebview app is crashing and around tests failing ,1
5838,21391225891.0,IssuesEvent,2022-04-21 07:19:59,mozilla-mobile/firefox-ios,https://api.github.com/repos/mozilla-mobile/firefox-ios,closed,Improve automatic string import to not take changes in unrelated files,eng:automation,"In this PR the automation is changing the package.resolved file by removing one line: https://github.com/mozilla-mobile/firefox-ios/pull/10505/files#diff-6edf4db475d69aa9d1d8c8cc7cba4419a30e16fddfb130b90bf06e2a5b809cb4L142
In this case that's not critical but it could be in case there is a package change. We need to be sure that only `locale.lproj` files are changed
┆Issue is synchronized with this [Jira Task](https://mozilla-hub.atlassian.net/browse/FXIOS-4101)
",1.0,"Improve automatic string import to not take changes in unrelated files - In this PR the automation is changing the package.resolved file by removing one line: https://github.com/mozilla-mobile/firefox-ios/pull/10505/files#diff-6edf4db475d69aa9d1d8c8cc7cba4419a30e16fddfb130b90bf06e2a5b809cb4L142
In this case that's not critical but it could be in case there is a package change. We need to be sure that only `locale.lproj` files are changed
┆Issue is synchronized with this [Jira Task](https://mozilla-hub.atlassian.net/browse/FXIOS-4101)
",1,improve automatic string import to not take changes in unrelated files in this pr the automation is changing the package resolved file by removing one line in this case that s not critical but it could be in case there is a package change we need to be sure that only locale lproj files are changed ┆issue is synchronized with this ,1
5866,21508957367.0,IssuesEvent,2022-04-28 00:47:54,rancher-sandbox/rancher-desktop,https://api.github.com/repos/rancher-sandbox/rancher-desktop,opened,"Change ""restart"" to ""VM restart""",kind/enhancement area/automation,"```console
PS C:\Users\Jan\Downloads> rdctl start --container-engine containerd
Status: triggering a restart to apply changes.
```
""Restart"" is alarming because it could mean a reboot of the host, but it really is just a reboot of the VM. The message should make that clear.",1.0,"Change ""restart"" to ""VM restart"" - ```console
PS C:\Users\Jan\Downloads> rdctl start --container-engine containerd
Status: triggering a restart to apply changes.
```
""Restart"" is alarming because it could mean a reboot of the host, but it really is just a reboot of the VM. The message should make that clear.",1,change restart to vm restart console ps c users jan downloads rdctl start container engine containerd status triggering a restart to apply changes restart is alarming because it could mean a reboot of the host but it really is just a reboot of the vm the message should make that clear ,1
8684,27172086752.0,IssuesEvent,2023-02-17 20:26:37,OneDrive/onedrive-api-docs,https://api.github.com/repos/OneDrive/onedrive-api-docs,closed,HTTP Error 423 (Locked) not documented,area:Docs automation:Closed,"Hello, while using the REST API to delete a file I got a response with status 423 (Locked) and body:
```
{
""error"": {
""code"": ""accessDenied"",
""innerError"": {
""date"": ""2018-09-12T06:12:46"",
""request-id"": ""74d9b899-d03e-44c1-8e36-f3c80dc00718""
},
""message"": ""Lock token does not match existing lock""
}
}
```
The error is self-explanatory and the `accessDenied` code is documented; however the [error section](https://github.com/OneDrive/onedrive-api-docs/blob/live/docs/rest-api/concepts/errors.md) does not say that the 423 status may be returned by the API.",1.0,"HTTP Error 423 (Locked) not documented - Hello, while using the REST API to delete a file I got a response with status 423 (Locked) and body:
```
{
""error"": {
""code"": ""accessDenied"",
""innerError"": {
""date"": ""2018-09-12T06:12:46"",
""request-id"": ""74d9b899-d03e-44c1-8e36-f3c80dc00718""
},
""message"": ""Lock token does not match existing lock""
}
}
```
The error is self-explanatory and the `accessDenied` code is documented; however the [error section](https://github.com/OneDrive/onedrive-api-docs/blob/live/docs/rest-api/concepts/errors.md) does not say that the 423 status may be returned by the API.",1,http error locked not documented hello while using the rest api to delete a file i got a response with status locked and body error code accessdenied innererror date request id message lock token does not match existing lock the error is self explanatory and the accessdenied code is documented however the does not say that the status may be returned by the api ,1
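Until the error page lists 423, callers need to handle it themselves; a minimal sketch in Python (the item URL and bearer token are placeholders, not values from the report):
```python
import requests

ITEM_URL = "https://graph.microsoft.com/v1.0/me/drive/items/{item-id}"  # placeholder
HEADERS = {"Authorization": "Bearer <access-token>"}                     # placeholder

resp = requests.delete(ITEM_URL, headers=HEADERS)
if resp.status_code == 423:
    # Locked: e.g. a lock token held by another client has not expired yet.
    print("Item is locked:", resp.json()["error"]["message"])
elif resp.status_code == 204:
    print("Deleted")
else:
    resp.raise_for_status()
```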
1519,10272574502.0,IssuesEvent,2019-08-23 16:49:07,mozilla-mobile/fenix,https://api.github.com/repos/mozilla-mobile/fenix,opened,Temporarily disable screenshots UI tests til androidx upgrade,eng:automation,"Android Gradle Plugin 3.5.0 requires AndroidX dependencies for testing , but screengrab doesn't work yet with AndroidX. @colintheshots has contributed back a patch (see: https://github.com/fastlane/fastlane/pull/15217). Until Google maintainers pick it up and create a build, we'll need to temporarily disable the screenshots tests.
cc: @isabelrios @npark-mozilla
see also:
https://github.com/mozilla-mobile/fenix/pull/4903
",1.0,"Temporarily disable screenshots UI tests til androidx upgrade - Android Gradle Plugin 3.5.0 requires AndroidX dependencies for testing , but screengrab doesn't work yet with AndroidX. @colintheshots has contributed back a patch (see: https://github.com/fastlane/fastlane/pull/15217). Until Google maintainers pick it up and create a build, we'll need to temporarily disable the screenshots tests.
cc: @isabelrios @npark-mozilla
see also:
https://github.com/mozilla-mobile/fenix/pull/4903
",1,temporarily disable screenshots ui tests til androidx upgrade android gradle plugin requires androidx dependencies for testing but screengrab doesn t work yet with androidx colintheshots has contributed back a patch see until google maintainers pick it up and create a build we ll need to temporarily disable the screenshots tests cc isabelrios npark mozilla see also ,1
106526,16682352536.0,IssuesEvent,2021-06-08 02:32:00,vipinsun/TrustID,https://api.github.com/repos/vipinsun/TrustID,opened,WS-2020-0132 (Medium) detected in jsrsasign-7.2.2.tgz,security vulnerability,"## WS-2020-0132 - Medium Severity Vulnerability
Vulnerable Library - jsrsasign-7.2.2.tgz
opensource free pure JavaScript cryptographic library supports RSA/RSAPSS/ECDSA/DSA signing/validation, ASN.1, PKCS#1/5/8 private/public key, X.509 certificate, CRL, OCSP, CMS SignedData, TimeStamp and CAdES and JSON Web Signature(JWS)/Token(JWT)/Key(JWK)
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"WS-2020-0132 (Medium) detected in jsrsasign-7.2.2.tgz - ## WS-2020-0132 - Medium Severity Vulnerability
Vulnerable Library - jsrsasign-7.2.2.tgz
opensource free pure JavaScript cryptographic library supports RSA/RSAPSS/ECDSA/DSA signing/validation, ASN.1, PKCS#1/5/8 private/public key, X.509 certificate, CRL, OCSP, CMS SignedData, TimeStamp and CAdES and JSON Web Signature(JWS)/Token(JWT)/Key(JWK)
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,ws medium detected in jsrsasign tgz ws medium severity vulnerability vulnerable library jsrsasign tgz opensource free pure javascript cryptographic library supports rsa rsapss ecdsa dsa signing validation asn pkcs private public key x certificate crl ocsp cms signeddata timestamp and cades and json web signature jws token jwt key jwk library home page a href path to dependency file trustid trustid sdk package json path to vulnerable library trustid trustid sdk node modules jsrsasign package json dependency hierarchy fabric ca client tgz root library x jsrsasign tgz vulnerable library found in head commit a href found in base branch master vulnerability details jsrsasign through is vulnerable to side channel attack publish date url a href cvss score details base score metrics exploitability metrics attack vector n a attack complexity n a privileges required n a user interaction n a scope n a impact metrics confidentiality impact n a integrity impact n a availability impact n a for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jsrsasign step up your open source security game with whitesource ,0
314323,9595462500.0,IssuesEvent,2019-05-09 16:06:28,carbon-design-system/carbon-components-react,https://api.github.com/repos/carbon-design-system/carbon-components-react,closed,React Tooltip - Accessibility markup cleanup,Severity 3 priority: high status: waiting for author's response type: a11y ♿ type: bug 🐛,"There are some unnecessary attributes in the accessibility-related markup for the React Tooltip.
For an overview, please see the screen capture of the markup, below.
Please note:
- if the user provides visible text content for the bx--tooltip__label, then the button should use `aria-labelledby` to point to its id, and if the user doesn't have visible text, then they need to provide an aria-label for the button
- the component should have a sensible default aria-label, like ""Info"" or ""Help"" (not ""tooltip""), and this default should be published in the doc for the aria-label prop (although users should be encouraged to provide their own)
Please delete:
- title - it is completely unnecessary
- aria-owns - this attribute is not used in the tooltip pattern
- role=""img"" and aria-label=""tooltip"" on the svg - these are not necessary because the button label overrides the image label (or keep role=""img"" and aria-label[ledby] on the svg and remove aria-label[ledby] from the button)
- alt=""tooltip"" on the svg - just delete this - alt is not a valid attribute on svg elements
- aria-labelledby on the tooltip div - this is not used in the tooltip pattern
Please consider:
- try to use a button element instead of `div role=""button"" tabindex=""0""` ... and use onclick instead of handling space and enter keys
- feel free to test with [this little test case](https://carmacleod.github.io/playground/tooltip-test.html) before switching to button. It uses a real button with (mostly) bx styles, and it looks ok.

",1.0,"React Tooltip - Accessibility markup cleanup - There are some unnecessary attributes in the accessibility-related markup for the React Tooltip.
For an overview, please see the screen capture of the markup, below.
Please note:
- if the user provides visible text content for the bx--tooltip__label, then the button should use `aria-labelledby` to point to its id, and if the user doesn't have visible text, then they need to provide an aria-label for the button
- the component should have a sensible default aria-label, like ""Info"" or ""Help"" (not ""tooltip""), and this default should be published in the doc for the aria-label prop (although users should be encouraged to provide their own)
Please delete:
- title - it is completely unnecessary
- aria-owns - this attribute is not used in the tooltip pattern
- role=""img"" and aria-label=""tooltip"" on the svg - these are not necessary because the button label overrides the image label (or keep role=""img"" and aria-label[ledby] on the svg and remove aria-label[ledby] from the button)
- alt=""tooltip"" on the svg - just delete this - alt is not a valid attribute on svg elements
- aria-labelledby on the tooltip div - this is not used in the tooltip pattern
Please consider:
- try to use a button element instead of `div role=""button"" tabindex=""0""` ... and use onclick instead of handling space and enter keys
- feel free to test with [this little test case](https://carmacleod.github.io/playground/tooltip-test.html) before switching to button. It uses a real button with (mostly) bx styles, and it looks ok.

",0,react tooltip accessibility markup cleanup there are some unnecessary attributes in the accessibility related markup for the react tooltip for an overview please see the screen capture of the markup below please note if the user provides visible text content for the bx tooltip label then the button should use aria labelledby to point to its id and if the user doesn t have visible text then they need to provide an aria label for the button the component should have a sensible default aria label like info or help not tooltip and this default should be published in the doc for the aria label prop although users should be encouraged to provide their own please delete title it is completely unnecessary aria owns this attribute is not used in the tooltip pattern role img and aria label tooltip on the svg these are not necessary because the button label overrides the image label or keep role img and aria label on the svg and remove aria label from the button alt tooltip on the svg just delete this alt is not a valid attribute on svg elements aria labelledby on the tooltip div this is not used in the tooltip pattern please consider try to use a button element instead of div role button tabindex and use onclick instead of handling space and enter keys feel free to test with before switching to button it uses a real button with mostly bx styles and it looks ok ,0
1120,9534991845.0,IssuesEvent,2019-04-30 04:41:23,askmench/mench-web-app,https://api.github.com/repos/askmench/mench-web-app,opened,Action Plan webview option to change OR answer,Bot/Chat-Automation Inputs/Forms,In the MVP version there is no function to change OR answers once selected by students. We can later build this functionality. ,1.0,Action Plan webview option to change OR answer - In the MVP version there is no function to change OR answers once selected by students. We can later build this functionality. ,1,action plan webview option to change or answer in the mvp version there is no function to change or answers once selected by students we can later build this functionality ,1
20960,27817510295.0,IssuesEvent,2023-03-18 21:19:10,cse442-at-ub/project_s23-iweatherify,https://api.github.com/repos/cse442-at-ub/project_s23-iweatherify,closed,Save the units and temperature settings to the database,Processing Task Sprint 2,"**Task Tests**
*Test 1*
1. Go to the following URL: https://github.com/cse442-at-ub/project_s23-iweatherify/tree/dev
2. Click on the green `<> Code` button and download the ZIP file.

3. Unzip the downloaded file to a folder on your computer.
4. Open a terminal and navigate to the git repository folder using the `cd` command.
5. Run the `npm install` command in the terminal to install the necessary dependencies.
6. Run the `npm start` command in the terminal to start the application.
7. Check the output from the npm start command for the URL to access the application. The URL should be a localhost address (e.g., http://localhost:8080).
8. Navigate to http://localhost:8080/#/login
9. Ensure you have logged in to our app to see the page; use UserID: `UB442` and Password: `Myub442@!` to log in
10. Go to URL: http://localhost:8080/#/unitsSettings
11. Verify that the units page is displayed

12. Change the temperature unit to Celsius (°C)
13. Change the wind unit to km/h
14. Change the pressure unit to mm
15. Change the distance unit to km
16. Open the browser inspector tool and select console
17. Click the save button
18. You should see the message: `Units saved successfully.` on the page

19. You should see the message: `{message: 'User settings saved successfully.'}` in the console

18. Open a different tab and go to: https://www-student.cse.buffalo.edu/tools/db/phpmyadmin/index.php
19. Input username: `jpan26` and password: `50314999`
20. Make sure the server choice is `oceanus.cse.buffalo.edu:3306`
21. Click go and you should see this page

22. Click `cse442_2023_spring_team_a_db` first and then `saved_units` on the left side of the page

23. Verify you see a row with the exact same information as shown by the picture

*Test 2*
1. Repeat steps 1 to 9 from `Test 1`
2. Go to URL: http://localhost:8080/#/tempSettings
3. Verify that the temperature setting page is displayed

4. Open the browser inspector tool and select console
5. Change the hot temperature to 80, you can either use the slider or input box and click save
6. You should see the message: `{result: 'success'}` in the console

7. You should see the message: `Temperatures Saved Successfully` on the page

8. Change the warm temperature to 65, you can either use the slider or input box and click save
9. You should see the message: `{result: 'success'}` in the console

10. You should see the message: `Temperatures Saved Successfully` on the page

11. Change the ideal temperature to 50, you can either use the slider or input box and click save
12. You should see the message: `{result: 'success'}` in the console

13. You should see the message: `Temperatures Saved Successfully` on the page

14. Change the chilly temperature to 0, you can either use the slider or input box and click save
15. You should see the message: `{result: 'success'}` in the console

16. You should see the message: `Temperatures Saved Successfully` on the page

17. Change the cold temperature to -65, you can either use the slider or input box and click save
18. You should see the message: `{result: 'success'}` in the console

19. You should see the message: `Temperatures Saved Successfully` on the page

20. Change the freezing temperature to -80, you can either use the slider or input box and click save
21. You should see the message: `{result: 'success'}` in the console

22. You should see the message: `Temperatures Saved Successfully` on the page

23. Repeat steps 18 to 21 from `Test 1`
24. Click `cse442_2023_spring_team_a_db` first and then `saved_temperatures` on the left side of the page

25. Verify you see a row with the exact same information as shown by the picture
",1.0,"Save the units and temperature settings to the database - **Task Tests**
*Test 1*
1. Go to the following URL: https://github.com/cse442-at-ub/project_s23-iweatherify/tree/dev
2. Click on the green `<> Code` button and download the ZIP file.

3. Unzip the downloaded file to a folder on your computer.
4. Open a terminal and navigate to the git repository folder using the `cd` command.
5. Run the `npm install` command in the terminal to install the necessary dependencies.
6. Run the `npm start` command in the terminal to start the application.
7. Check the output from the npm start command for the URL to access the application. The URL should be a localhost address (e.g., http://localhost:8080).
8. Navigate to http://localhost:8080/#/login
9. Ensure you have logged in to our app to see the page; use UserID: `UB442` and Password: `Myub442@!` to log in
10. Go to URL: http://localhost:8080/#/unitsSettings
11. Verify that the units page is displayed

12. Change the temperature unit to Celsius (°C)
13. Change the wind unit to km/h
14. Change the pressure unit to mm
15. Change the distance unit to km
16. Open the browser inspector tool and select console
17. Click the save button
18. You should see the message: `Units saved successfully.` on the page

19. You should see the message: `{message: 'User settings saved successfully.'}` in the console

18. Open a different tab and go to: https://www-student.cse.buffalo.edu/tools/db/phpmyadmin/index.php
19. Input username: `jpan26` and password: `50314999`
20. Make sure the server choice is `oceanus.cse.buffalo.edu:3306`
21. Click go and you should see this page

22. Click `cse442_2023_spring_team_a_db` first and then `saved_units` on the left side of the page

23. Verify you see a row with the exact same information as shown by the picture

*Test 2*
1. Repeat steps 1 to 9 from `Test 1`
2. Go to URL: http://localhost:8080/#/tempSettings
3. Verify that the temperature setting page is displayed

4. Open the browser inspector tool and select console
5. Change the hot temperature to 80, you can either use the slider or input box and click save
6. You should see the message: `{result: 'success'}` in the console

7. You should see the message: `Temperatures Saved Successfully` on the page

8. Change the warm temperature to 65, you can either use the slider or input box and click save
9. You should see the message: `{result: 'success'}` in the console

10. You should see the message: `Temperatures Saved Successfully` on the page

11. Change the ideal temperature to 50, you can either use the slider or input box and click save
12. You should see the message: `{result: 'success'}` in the console

13. You should see the message: `Temperatures Saved Successfully` on the page

14. Change the chilly temperature to 0, you can either use the slider or input box and click save
15. You should see the message: `{result: 'success'}` in the console

16. You should see the message: `Temperatures Saved Successfully` on the page

17. Change the cold temperature to -65, you can either use the slider or input box and click save
18. You should see the message: `{result: 'success'}` in the console

19. You should see the message: `Temperatures Saved Successfully` on the page

20. Change the freezing temperature to -80, you can either use the slider or input box and click save
21. You should see the message: `{result: 'success'}` in the console

22. You should see the message: `Temperatures Saved Successfully` on the page

23. Repeat steps 18 to 21 from `Test 1`
24. Click `cse442_2023_spring_team_a_db` first and then `saved_temperatures` on the left side of the page

25. Verify you see a row with the exact same information as shown by the picture
",0,save the units and temperature settings to the database task tests test go to the following url click on the green code button and download the zip file unzip the downloaded file to a folder on your computer open a terminal and navigate to the git repository folder using the cd command run the npm install command in the terminal to install the necessary dependencies run the npm start command in the terminal to start the application check the output from the npm start command for the url to access the application the url should be a localhost address e g navigate to ensure you have logged in to our app to see the page use userid and password to login go to url verify that the units page is displayed change the temperature unit to celsius °c change the wind unit to km h change the pressure unit to mm change the distance unit to km open the browser inspector tool and select console click the save button you should see the message units saved successfully on the page you should see the message message user settings saved successfully in the console open a different tab and go to input username and password make sure the server choice is oceanus cse buffalo edu click go and you should see this page click spring team a db first and then saved units on the left side of the page verify you see a row with the exact same information as shown by the picture test repeat steps to from test go to url verify that the temperature setting page is displayed open the browser inspector tool and select console change the hot temperature to you can either use the slider or input box and click save you should see the message result success in the console you should see the message temperatures saved successfully on the page change the warm temperature to you can either use the slider or input box and click save you should see the message result success in the console you should see the message temperatures saved successfully on the page change the ideal temperature to you can either use the slider or input box and click save you should see the message result success in the console you should see the message temperatures saved successfully on the page change the chilly temperature to you can either use the slider or input box and click save you should see the message result success in the console you should see the message temperatures saved successfully on the page change the cold temperature to you can either use the slider or input box and click save you should see the message result success in the console you should see the message temperatures saved successfully on the page change the freezing temperature to you can either use the slider or input box and click save you should see the message result success in the console you should see the message temperatures saved successfully on the page repeat steps to from test click spring team a db first and then saved temperatures on the left side of the page verify you see a row with the exact same information as shown by the picture ,0
2081,11360349944.0,IssuesEvent,2020-01-26 05:56:51,sourcegraph/sourcegraph,https://api.github.com/repos/sourcegraph/sourcegraph,closed,`scopeQuery` does not filter down repository results,automation bug customer,"Reported by https://app.hubspot.com/contacts/2762526/contact/17877751 that the `scopeQuery` for a8n campaigns was not matching the number of search results when using `repohasfile` filter.
[Slack thread](https://sourcegraph.slack.com/archives/CMMTWQQ49/p1579711776061700) notes:
> Ohh, boy. I think I've got it: I need to call `.Results()` again on the `results` here: https://github.com/sourcegraph/sourcegraph/blob/11b5ebbe3458c01d9d35a766fbfa4b07b3472be6/cmd/frontend/graphqlbackend/search.go#L756-L760
",1.0,"`scopeQuery` does not filter down repository results - Reported by https://app.hubspot.com/contacts/2762526/contact/17877751 that the `scopeQuery` for a8n campaigns was not matching the number of search results when using `repohasfile` filter.
[Slack thread](https://sourcegraph.slack.com/archives/CMMTWQQ49/p1579711776061700) notes:
> Ohh, boy. I think I've got it: I need to call `.Results()` again on the `results` here: https://github.com/sourcegraph/sourcegraph/blob/11b5ebbe3458c01d9d35a766fbfa4b07b3472be6/cmd/frontend/graphqlbackend/search.go#L756-L760
",1, scopequery does not filter down repository results reported by that the scopequery for campaigns was not matching the number of search results when using repohasfile filter notes ohh boy i think i ve got it i need to call results again on the results here ,1
4255,15887660844.0,IssuesEvent,2021-04-10 03:30:11,pulumi/pulumi,https://api.github.com/repos/pulumi/pulumi,closed,"[Automation API] Unintuitive behavior with relative paths (FileAsset, closure serialization) and inline programs",area/automation-api kind/enhancement language/go language/javascript resolution/fixed,"Automation API inline programs create a temporary working directory (unless directly specified) to store pulumi.yaml and to invoke the CLI. This breaks any relative path references the user may define in their inline program:
```ts
const pulumiProgram = async () => {
// at runtime this actually resolves to $AUTOMATION_API_TEMP/build
const relative = ""./build""
}
```
We have a few options:
1. Change Automation API to use `.` as the current working dir. This preserves relative paths, but has the downside of leaking files like `pulumi.yaml` and `pulumi.stack.yaml` which the user might not care about or might find confusing to see generated out of nowhere.
2. Document this behavior and leave it as is. encourage users to only use absolute paths with inline programs.
3. Do some sort of deeper fix where we specify the temp directory to be used only for settings (yaml files), and use `.` as the CWD for program execution. ",1.0,"[Automation API] Unintuitive behavior with relative paths (FileAsset, closure serialization) and inline programs - Automation API inline programs create a temporary working directory (unless directly specified) to store pulumi.yaml and to invoke the CLI. This breaks any relative path references the user may define in their inline program:
```ts
const pulumiProgram = async () => {
// at runtime this actually resolves to $AUTOMATION_API_TEMP/build
const relative = ""./build""
}
```
We have a few options:
1. Change Automation API to use `.` as the current working dir. This preserves relative paths, but has the downside of leaking files like `pulumi.yaml` and `pulumi.stack.yaml` which the user might not care about or might find confusing to see generated out of nowhere.
2. Document this behavior and leave it as is. encourage users to only use absolute paths with inline programs.
3. Do some sort of deeper fix where we specify the temp directory to be used only for settings (yaml files), and use `.` as the CWD for program execution. ",1, unintuitive behavior with relative paths fileasset closure serialization and inline programs automation api inline programs create a temporary working directory unless directly specified to store pulumi yaml and to invoke the cli this breaks any relative path references the user may define in their inline program ts const pulumiprogram async at runtime this actually resolves to automation api temp build const relative build we have a few options change automation api to use as the current working dir this preserves relative paths but has the downside of leaking files like pulumi yaml and pulumi stack yaml which the user might not care about or might find confusing to see generated out of nowhere document this behavior and leave it as is encourage users to only use absolute paths with inline programs do some sort of deeper fix where we specify the temp directory to be used only for settings yaml files and use as the cwd for program execution ,1
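A minimal sketch of the absolute-path workaround mentioned in option 2 above, assuming a CommonJS build where `__dirname` is available; the directory and file names are placeholders, and `pulumi.asset.FileAsset` is used only as an example of a path-consuming API:

```ts
import * as path from "path";
import * as pulumi from "@pulumi/pulumi";

// Resolve against the file that defines the inline program, not process.cwd(),
// since the Automation API may run the program from a temporary settings directory.
const buildDir = path.resolve(__dirname, "build");

const pulumiProgram = async () => {
  // An asset built from an absolute path is unaffected by whichever working
  // directory the Automation API picked for the run.
  const indexAsset = new pulumi.asset.FileAsset(path.join(buildDir, "index.html"));
  // ...pass indexAsset to a resource here (e.g. a bucket object)...
  return { buildDir };
};
```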
288516,31861429545.0,IssuesEvent,2023-09-15 11:12:55,nidhi7598/linux-v4.19.72_CVE-2022-3564,https://api.github.com/repos/nidhi7598/linux-v4.19.72_CVE-2022-3564,opened,"CVE-2022-3565 (High) detected in linuxlinux-4.19.294, linuxlinux-4.19.294",Mend: dependency security vulnerability,"## CVE-2022-3565 - High Severity Vulnerability
Vulnerable Libraries - linuxlinux-4.19.294, linuxlinux-4.19.294
Vulnerability Details
A vulnerability, which was classified as critical, has been found in Linux Kernel. Affected by this issue is the function del_timer of the file drivers/isdn/mISDN/l1oip_core.c of the component Bluetooth. The manipulation leads to use after free. It is recommended to apply a patch to fix this issue. The identifier of this vulnerability is VDB-211088.
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2022-3565 (High) detected in linuxlinux-4.19.294, linuxlinux-4.19.294 - ## CVE-2022-3565 - High Severity Vulnerability
Vulnerable Libraries - linuxlinux-4.19.294, linuxlinux-4.19.294
Vulnerability Details
A vulnerability, which was classified as critical, has been found in Linux Kernel. Affected by this issue is the function del_timer of the file drivers/isdn/mISDN/l1oip_core.c of the component Bluetooth. The manipulation leads to use after free. It is recommended to apply a patch to fix this issue. The identifier of this vulnerability is VDB-211088.
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve high detected in linuxlinux linuxlinux cve high severity vulnerability vulnerable libraries linuxlinux linuxlinux vulnerability details a vulnerability which was classified as critical has been found in linux kernel affected by this issue is the function del timer of the file drivers isdn misdn core c of the component bluetooth the manipulation leads to use after free it is recommended to apply a patch to fix this issue the identifier of this vulnerability is vdb publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend ,0
5357,19295477955.0,IssuesEvent,2021-12-12 14:15:31,Azure/PSRule.Rules.Azure,https://api.github.com/repos/Azure/PSRule.Rules.Azure,closed,Automation accounts should enable diagnostic logs,rule: automation-account,"# Rule request
Automation accounts should enable the following diagnostic logs:
- JobLogs
- JobStreams
- DSCNodeStatus
- Metrics
## Applies to the following
The rule applies to the following:
- Resource type: **Microsoft.Automation/automationAccounts**
## Additional context
[Template reference](https://docs.microsoft.com/en-us/azure/templates/microsoft.automation/automationaccounts?tabs=bicep)
",1.0,"Automation accounts should enable diagnostic logs - # Rule request
Automation accounts should enable the following diagnostic logs:
- JobLogs
- JobStreams
- DSCNodeStatus
- Metrics
## Applies to the following
The rule applies to the following:
- Resource type: **Microsoft.Automation/automationAccounts**
## Additional context
[Template reference](https://docs.microsoft.com/en-us/azure/templates/microsoft.automation/automationaccounts?tabs=bicep)
",1,automation accounts should enable diagnostic logs rule request automation accounts should enable the following diagnostic logs joblogs jobstreams dscnodestatus metrics applies to the following the rule applies to the following resource type microsoft automation automationaccounts additional context ,1
1047,9257177564.0,IssuesEvent,2019-03-17 03:00:32,askmench/mench-web-app,https://api.github.com/repos/askmench/mench-web-app,opened,Ping for Status Level-Up,Bot/Chat-Automation Communication Tool Team Communication,"Since a single person would not be able to publish content on Mench as everyone would require at-least 1 other person to review/iterate their work, we can add a feature to ""Ping"" another miner to a particular intent/entity to have it published. It's a communication tool that would help miners use each other's help to publish content to Mench.
Workflow:
1. Miner mines intents/messages they want to mine
2. Once they feel ready for review, they would ping either a specific miner or ""any"" miner and Mench personal assistant would send a message to miners to notify them about the intent/message that needs attention
3. The second miner loads up the intent, does the review, iterates the content if needed, and then changes the status by 1 level. Note that there is a special permission called ""[Double Status Level-Up](https://mench.com/entities/6084)"" which if set to a miner, would allow them to change the status of New to Published if they deem the intent is ready to go live. They can also choose to level-up by one (new to drafting) and then have some other miner (could be the original author again) do another review/iteration and then level-up from drafting to published.
The idea is to create a step-by-step inter-dependency workflow designed around the principles of collaboration.
@grumo Thoughts?",1.0,"Ping for Status Level-Up - Since a single person would not be able to publish content on Mench as everyone would require at-least 1 other person to review/iterate their work, we can add a feature to ""Ping"" another miner to a particular intent/entity to have it published. It's a communication tool that would help miners use each other's help to publish content to Mench.
Workflow:
1. Miner mines intents/messages they want to mine
2. Once they feel ready for review, they would ping either a specific miner or ""any"" miner and Mench personal assistant would send a message to miners to notify them about the intent/message that needs attention
3. The second miner loads up the intent, does the review, iterates the content if needed, and then changes the status by 1 level. Note that there is a special permission called ""[Double Status Level-Up](https://mench.com/entities/6084)"" which if set to a miner, would allow them to change the status of New to Published if they deem the intent is ready to go live. They can also choose to level-up by one (new to drafting) and then have some other miner (could be the original author again) do another review/iteration and then level-up from drafting to published.
The idea is to create a step-by-step inter-dependency workflow designed around the principles of collaboration.
@grumo Thoughts?",1,ping for status level up since a single person would not be able to publish content on mench as everyone would require at least other person to review iterate their work we can add a feature to ping another miner to a particular intent entity to have it published it s a communication tool that would help miners use each other s help to publish content to mench workflow miner mines intents messages they want to mine once they feel ready for review they would ping either a specific miner or any miner and mench personal assistant would send a message to miners to notify them about the intent message that needs attention the second miner loads up the intent does the review iterates the content if needed and then changes the status by level note that there is a special permission called which if set to a miner would allow them to change the status of new to published if they deem the intent is ready to go live they can also choose to level up by one new to drafting and then have some other miner could be the original author again do another review iteration and then level up from drafting to published the idea is to create a step by step inter dependancy workflow designed around the principles of collaboration grumo thoughts ,1
739268,25588465743.0,IssuesEvent,2022-12-01 11:07:16,markmcsherry/testproj,https://api.github.com/repos/markmcsherry/testproj,opened,US - Uber Feature,type:user-story :moneybag: priority:2,"**As a _persona_ I want to _do something_ so that I can _achieve some benefit_**
### Description
Who what where & why...
And a few more details...
---
### Design
What's it going to look like
### Acceptance Criteria
...
### Notes
---
### Tasks
- [ ]
",1.0,"US - Uber Feature - **As a _persona_ I want to _do something_ so that I can _achieve some benefit_**
### Description
Who what where & why...
And a few more details...
---
### Design
What's it going to look like
### Acceptance Criteria
...
### Notes
---
### Tasks
- [ ]
",0,us uber feature as a persona i want to do something so that i can achieve some benefit description who what where why and a few more details design what s it going to look like acceptance criteria notes tasks ,0
62086,6775884543.0,IssuesEvent,2017-10-27 15:44:19,apache/incubator-openwhisk-wskdeploy,https://api.github.com/repos/apache/incubator-openwhisk-wskdeploy,closed,WIP: Enable Action Limits unit test,priority: high tests: unit,"At some point, the unit test for testing Action Limits within **parsers/manifest_parser_test.go** called ""_TestComposeActionsForLimits_"" was commented out with a **TODO**: _""uncomment this test case after issue # 312 is fixed""_
Issue 312 has been closed and merged via PR 556, yet this test remains commented out.
- https://github.com/apache/incubator-openwhisk-wskdeploy/issues/312
- https://github.com/apache/incubator-openwhisk-wskdeploy/pull/556
which confuses me further, as PR 556 added the testcase that was commented out.
We need a working unit test AND figure out when/why it was commented out
",1.0,"WIP: Enable Action Limits unit test - At some point, the unit test for testing Action Limits within **parsers/manifest_parser_test.go** called ""_TestComposeActionsForLimits_"" was commented out with a **TODO**: _""uncomment this test case after issue # 312 is fixed""_
Issue 312 has been closed and merged via PR 556, yet this test remains commented out.
- https://github.com/apache/incubator-openwhisk-wskdeploy/issues/312
- https://github.com/apache/incubator-openwhisk-wskdeploy/pull/556
which confuses me further, as PR 556 added the testcase that was commented out.
We need a working unit test AND figure out when/why it was commented out
",0,wip enable action limits unit test at some point the unit test for testing action limits within parsers manifest parser test go called testcomposeactionsforlimits was commented out with a todo uncomment this test case after issue is fixed issue has been closed and merged via pr yet this test remains commented out which confuses me further as pr added the testcase that was commented out we need a working unit test and figure out when why it was commented out ,0
9888,30707273794.0,IssuesEvent,2023-07-27 07:17:22,red-hat-storage/ocs-ci,https://api.github.com/repos/red-hat-storage/ocs-ci,opened,rename System Capacity to System capacity on UI,ui_automation,"new Header fails the test test_dashboard_validation_ui on 4.14 and 4.13
Pay attention to the other headers: the second word of each header starts with a lowercase letter
Failure:
https://reportportal-ocs4.apps.ocp-c1.prod.psi.redhat.com/ui/#ocs/launches/557/13044/596923/597228/597230/log
http://magna002.ceph.redhat.com/ocsci-jenkins/openshift-clusters/j-291ai3c333-t1/j-291ai3c333-t1_20230725T134831/logs/ui_logs_dir_1690296996/screenshots_ui/test_dashboard_validation_ui/2023-07-26T06-02-46.842175.png",1.0,"rename System Capacity to System capacity on UI - new Header fails the test test_dashboard_validation_ui on 4.14 and 4.13
Pay attention to the other headers: the second word of each header starts with a lowercase letter
Failure:
https://reportportal-ocs4.apps.ocp-c1.prod.psi.redhat.com/ui/#ocs/launches/557/13044/596923/597228/597230/log
http://magna002.ceph.redhat.com/ocsci-jenkins/openshift-clusters/j-291ai3c333-t1/j-291ai3c333-t1_20230725T134831/logs/ui_logs_dir_1690296996/screenshots_ui/test_dashboard_validation_ui/2023-07-26T06-02-46.842175.png",1,rename system capacity to system capacity on ui new header fails the test test dashboard validation ui on and pay attention on other headers the second word of the header starts from lower case letter failure ,1
277055,30602418261.0,IssuesEvent,2023-07-22 15:04:04,TolMen/Project5_OC_Blog,https://api.github.com/repos/TolMen/Project5_OC_Blog,opened,[#8] Checking for security vulnerabilities,bug security,"Perform security testing to ensure there are no security vulnerabilities (XSS, CSRF, SQL Injection, etc.) in the blog",True,"[#8] Checking for security vulnerabilities - Perform security testing to ensure there are no security vulnerabilities (XSS, CSRF, SQL Injection, etc.) in the blog",0, checking for security vulnerabilities perform security testing to ensure there are no security vulnerabilities xss csrf sql injection etc in the blog,0
32833,15684464570.0,IssuesEvent,2021-03-25 10:01:59,PrehistoricKingdom/feedback,https://api.github.com/repos/PrehistoricKingdom/feedback,closed,PK start loading bug,duplicate performance saving-loading,when the loading bar is loading it goes back and then the game will keep loading then stop and crash or when i get in to the game and its fine but then will crash for no reason,True,PK start loading bug - when the loading bar is loading it goes back and then the game will keep loading then stop and crash or when i get in to the game and its fine but then will crash for no reason,0,pk start loading bug when the loading bar is loading it goes back and then the game will keep loading then stop and crash or when i get in to the game and its fine but then will crash for no reason,0
8096,26170420078.0,IssuesEvent,2023-01-01 21:08:47,tm24fan8/Home-Assistant-Configs,https://api.github.com/repos/tm24fan8/Home-Assistant-Configs,opened,Continue making Holiday Mode more versatile,enhancement lighting presence detection automation,"Need to support more holidays, right now it's mainly set up for Christmas.",1.0,"Continue making Holiday Mode more versatile - Need to support more holidays, right now it's mainly set up for Christmas.",1,continue making holiday mode more versatile need to support more holidays right now it s mainly set up for christmas ,1
637146,20622013073.0,IssuesEvent,2022-03-07 18:21:15,grpc/grpc,https://api.github.com/repos/grpc/grpc,opened,ruby macos build flake - /bin/sh: /bin/sh: cannot execute binary file,kind/bug priority/P2,"```
[C] Compiling src/core/ext/upbdefs-generated/xds/annotations/v3/sensitive.upbdefs.c
mkdir -p `dirname /Volumes/BuildData/tmpfs/altsrc/github/grpc/workspace_ruby_native_gem_macos_darwin/tmp/universal-darwin/grpc_c/3.0.0/objs/opt/src/core/ext/upbdefs-generated/xds/annotations/v3/sensitive.upbdefs.o`
clang -fdeclspec -Ithird_party/boringssl-with-bazel/src/include -Ithird_party/address_sorting/include -Ithird_party/cares/cares/include -Ithird_party/cares -Ithird_party/cares/cares -DGPR_BACKWARDS_COMPATIBILITY_MODE -DGRPC_XDS_USER_AGENT_NAME_SUFFIX=""\""RUBY\"""" -DGRPC_XDS_USER_AGENT_VERSION_SUFFIX=""\""1.45.0.dev\"""" -g -Wall -Wextra -DOSATOMIC_USE_INLINED=1 -Ithird_party/abseil-cpp -Ithird_party/re2 -Ithird_party/upb -Isrc/core/ext/upb-generated -Isrc/core/ext/upbdefs-generated -Ithird_party/xxhash -O2 -Wframe-larger-than=16384 -fPIC -I. -Iinclude -I/Volumes/BuildData/tmpfs/altsrc/github/grpc/workspace_ruby_native_gem_macos_darwin/tmp/universal-darwin/grpc_c/3.0.0/gens -I/usr/local/include -DNDEBUG -DINSTALL_PREFIX=\""/usr/local\"" -arch i386 -arch x86_64 -Ithird_party/zlib -std=c99 -Wextra-semi -g -MMD -MF /Volumes/BuildData/tmpfs/altsrc/github/grpc/workspace_ruby_native_gem_macos_darwin/tmp/universal-darwin/grpc_c/3.0.0/objs/opt/src/core/ext/upbdefs-generated/xds/annotations/v3/sensitive.upbdefs.dep -c -o /Volumes/BuildData/tmpfs/altsrc/github/grpc/workspace_ruby_native_gem_macos_darwin/tmp/universal-darwin/grpc_c/3.0.0/objs/opt/src/core/ext/upbdefs-generated/xds/annotations/v3/sensitive.upbdefs.o src/core/ext/upbdefs-generated/xds/annotations/v3/sensitive.upbdefs.c
[C] Compiling src/core/ext/upbdefs-generated/xds/annotations/v3/status.upbdefs.c
/bin/sh: /bin/sh: cannot execute binary file
mkdir -p `dirname /Volumes/BuildData/tmpfs/altsrc/github/grpc/workspace_ruby_native_gem_macos_darwin/tmp/universal-darwin/grpc_c/3.0.0/objs/opt/src/core/ext/upbdefs-generated/xds/annotations/v3/status.upbdefs.o`
make: *** [/Volumes/BuildData/tmpfs/altsrc/github/grpc/workspace_ruby_native_gem_macos_darwin/tmp/universal-darwin/grpc_c/3.0.0/objs/opt/src/core/ext/upbdefs-generated/xds/annotations/v3/sensitive.upbdefs.o] Error 126
make: *** Waiting for unfinished jobs....
clang -fdeclspec -Ithird_party/boringssl-with-bazel/src/include -Ithird_party/address_sorting/include -Ithird_party/cares/cares/include -Ithird_party/cares -Ithird_party/cares/cares -DGPR_BACKWARDS_COMPATIBILITY_MODE -DGRPC_XDS_USER_AGENT_NAME_SUFFIX=""\""RUBY\"""" -DGRPC_XDS_USER_AGENT_VERSION_SUFFIX=""\""1.45.0.dev\"""" -g -Wall -Wextra -DOSATOMIC_USE_INLINED=1 -Ithird_party/abseil-cpp -Ithird_party/re2 -Ithird_party/upb -Isrc/core/ext/upb-generated -Isrc/core/ext/upbdefs-generated -Ithird_party/xxhash -O2 -Wframe-larger-than=16384 -fPIC -I. -Iinclude -I/Volumes/BuildData/tmpfs/altsrc/github/grpc/workspace_ruby_native_gem_macos_darwin/tmp/universal-darwin/grpc_c/3.0.0/gens -I/usr/local/include -DNDEBUG -DINSTALL_PREFIX=\""/usr/local\"" -arch i386 -arch x86_64 -Ithird_party/zlib -std=c99 -Wextra-semi -g -MMD -MF /Volumes/BuildData/tmpfs/altsrc/github/grpc/workspace_ruby_native_gem_macos_darwin/tmp/universal-darwin/grpc_c/3.0.0/objs/opt/src/core/ext/upbdefs-generated/xds/annotations/v3/status.upbdefs.dep -c -o /Volumes/BuildData/tmpfs/altsrc/github/grpc/workspace_ruby_native_gem_macos_darwin/tmp/universal-darwin/grpc_c/3.0.0/objs/opt/src/core/ext/upbdefs-generated/xds/annotations/v3/status.upbdefs.o src/core/ext/upbdefs-generated/xds/annotations/v3/status.upbdefs.c
*** ../../../../src/ruby/ext/grpc/extconf.rb failed ***
Could not create Makefile due to some reason, probably lack of necessary
libraries and/or headers. Check the mkmf.log file for more details. You may
need configuration options.
Provided configuration options:
--with-opt-dir
--without-opt-dir
--with-opt-include
--without-opt-include=${opt-dir}/include
--with-opt-lib
--without-opt-lib=${opt-dir}/lib
--with-make-prog
--without-make-prog
--srcdir=../../../../src/ruby/ext/grpc
--curdir
--ruby=/Users/kbuilder/.rake-compiler/ruby/x86_64-darwin11/ruby-3.0.0/bin/$(RUBY_BASE_NAME)
rake aborted!
Command failed with status (1): [/Users/kbuilder/.rvm/rubies/ruby-2.5.0/bin...]
/Volumes/BuildData/tmpfs/altsrc/github/grpc/workspace_ruby_native_gem_macos_darwin/bundle_local_gems/ruby/2.5.0/gems/rake-compiler-1.1.1/lib/rake/extensiontask.rb:206:in `block (2 levels) in define_compile_tasks'
/Volumes/BuildData/tmpfs/altsrc/github/grpc/workspace_ruby_native_gem_macos_darwin/bundle_local_gems/ruby/2.5.0/gems/rake-compiler-1.1.1/lib/rake/extensiontask.rb:203:in `block in define_compile_tasks'
/Volumes/BuildData/tmpfs/altsrc/github/grpc/workspace_ruby_native_gem_macos_darwin/bundle_local_gems/ruby/2.5.0/gems/rake-13.0.6/exe/rake:27:in `'
/Users/kbuilder/.rvm/gems/ruby-2.5.0/bin/bundle:25:in `load'
/Users/kbuilder/.rvm/gems/ruby-2.5.0/bin/bundle:25:in `'
Tasks: TOP => native => native:universal-darwin => native:grpc:universal-darwin => tmp/universal-darwin/stage/src/ruby/lib/grpc/3.0/grpc_c.bundle => copy:grpc_c:universal-darwin:3.0.0 => tmp/universal-darwin/grpc_c/3.0.0/grpc_c.bundle => tmp/universal-darwin/grpc_c/3.0.0/Makefile
(See full trace by running task with --trace)
+ '[' Darwin == Darwin ']'
++ ls 'pkg/*.gem'
++ grep -v darwin
ls: pkg/*.gem: No such file or directory
+ rm
usage: rm [-f | -i] [-dPRrvW] file ...
unlink file
```
https://source.cloud.google.com/results/invocations/a5266702-9573-493b-a946-2f0a779cbd1e/targets/grpc%2Fcore%2Fpull_request%2Fmacos%2Fgrpc_build_artifacts/log",1.0,"ruby macos build flake - /bin/sh: /bin/sh: cannot execute binary file - ```
[C] Compiling src/core/ext/upbdefs-generated/xds/annotations/v3/sensitive.upbdefs.c
mkdir -p `dirname /Volumes/BuildData/tmpfs/altsrc/github/grpc/workspace_ruby_native_gem_macos_darwin/tmp/universal-darwin/grpc_c/3.0.0/objs/opt/src/core/ext/upbdefs-generated/xds/annotations/v3/sensitive.upbdefs.o`
clang -fdeclspec -Ithird_party/boringssl-with-bazel/src/include -Ithird_party/address_sorting/include -Ithird_party/cares/cares/include -Ithird_party/cares -Ithird_party/cares/cares -DGPR_BACKWARDS_COMPATIBILITY_MODE -DGRPC_XDS_USER_AGENT_NAME_SUFFIX=""\""RUBY\"""" -DGRPC_XDS_USER_AGENT_VERSION_SUFFIX=""\""1.45.0.dev\"""" -g -Wall -Wextra -DOSATOMIC_USE_INLINED=1 -Ithird_party/abseil-cpp -Ithird_party/re2 -Ithird_party/upb -Isrc/core/ext/upb-generated -Isrc/core/ext/upbdefs-generated -Ithird_party/xxhash -O2 -Wframe-larger-than=16384 -fPIC -I. -Iinclude -I/Volumes/BuildData/tmpfs/altsrc/github/grpc/workspace_ruby_native_gem_macos_darwin/tmp/universal-darwin/grpc_c/3.0.0/gens -I/usr/local/include -DNDEBUG -DINSTALL_PREFIX=\""/usr/local\"" -arch i386 -arch x86_64 -Ithird_party/zlib -std=c99 -Wextra-semi -g -MMD -MF /Volumes/BuildData/tmpfs/altsrc/github/grpc/workspace_ruby_native_gem_macos_darwin/tmp/universal-darwin/grpc_c/3.0.0/objs/opt/src/core/ext/upbdefs-generated/xds/annotations/v3/sensitive.upbdefs.dep -c -o /Volumes/BuildData/tmpfs/altsrc/github/grpc/workspace_ruby_native_gem_macos_darwin/tmp/universal-darwin/grpc_c/3.0.0/objs/opt/src/core/ext/upbdefs-generated/xds/annotations/v3/sensitive.upbdefs.o src/core/ext/upbdefs-generated/xds/annotations/v3/sensitive.upbdefs.c
[C] Compiling src/core/ext/upbdefs-generated/xds/annotations/v3/status.upbdefs.c
/bin/sh: /bin/sh: cannot execute binary file
mkdir -p `dirname /Volumes/BuildData/tmpfs/altsrc/github/grpc/workspace_ruby_native_gem_macos_darwin/tmp/universal-darwin/grpc_c/3.0.0/objs/opt/src/core/ext/upbdefs-generated/xds/annotations/v3/status.upbdefs.o`
make: *** [/Volumes/BuildData/tmpfs/altsrc/github/grpc/workspace_ruby_native_gem_macos_darwin/tmp/universal-darwin/grpc_c/3.0.0/objs/opt/src/core/ext/upbdefs-generated/xds/annotations/v3/sensitive.upbdefs.o] Error 126
make: *** Waiting for unfinished jobs....
clang -fdeclspec -Ithird_party/boringssl-with-bazel/src/include -Ithird_party/address_sorting/include -Ithird_party/cares/cares/include -Ithird_party/cares -Ithird_party/cares/cares -DGPR_BACKWARDS_COMPATIBILITY_MODE -DGRPC_XDS_USER_AGENT_NAME_SUFFIX=""\""RUBY\"""" -DGRPC_XDS_USER_AGENT_VERSION_SUFFIX=""\""1.45.0.dev\"""" -g -Wall -Wextra -DOSATOMIC_USE_INLINED=1 -Ithird_party/abseil-cpp -Ithird_party/re2 -Ithird_party/upb -Isrc/core/ext/upb-generated -Isrc/core/ext/upbdefs-generated -Ithird_party/xxhash -O2 -Wframe-larger-than=16384 -fPIC -I. -Iinclude -I/Volumes/BuildData/tmpfs/altsrc/github/grpc/workspace_ruby_native_gem_macos_darwin/tmp/universal-darwin/grpc_c/3.0.0/gens -I/usr/local/include -DNDEBUG -DINSTALL_PREFIX=\""/usr/local\"" -arch i386 -arch x86_64 -Ithird_party/zlib -std=c99 -Wextra-semi -g -MMD -MF /Volumes/BuildData/tmpfs/altsrc/github/grpc/workspace_ruby_native_gem_macos_darwin/tmp/universal-darwin/grpc_c/3.0.0/objs/opt/src/core/ext/upbdefs-generated/xds/annotations/v3/status.upbdefs.dep -c -o /Volumes/BuildData/tmpfs/altsrc/github/grpc/workspace_ruby_native_gem_macos_darwin/tmp/universal-darwin/grpc_c/3.0.0/objs/opt/src/core/ext/upbdefs-generated/xds/annotations/v3/status.upbdefs.o src/core/ext/upbdefs-generated/xds/annotations/v3/status.upbdefs.c
*** ../../../../src/ruby/ext/grpc/extconf.rb failed ***
Could not create Makefile due to some reason, probably lack of necessary
libraries and/or headers. Check the mkmf.log file for more details. You may
need configuration options.
Provided configuration options:
--with-opt-dir
--without-opt-dir
--with-opt-include
--without-opt-include=${opt-dir}/include
--with-opt-lib
--without-opt-lib=${opt-dir}/lib
--with-make-prog
--without-make-prog
--srcdir=../../../../src/ruby/ext/grpc
--curdir
--ruby=/Users/kbuilder/.rake-compiler/ruby/x86_64-darwin11/ruby-3.0.0/bin/$(RUBY_BASE_NAME)
rake aborted!
Command failed with status (1): [/Users/kbuilder/.rvm/rubies/ruby-2.5.0/bin...]
/Volumes/BuildData/tmpfs/altsrc/github/grpc/workspace_ruby_native_gem_macos_darwin/bundle_local_gems/ruby/2.5.0/gems/rake-compiler-1.1.1/lib/rake/extensiontask.rb:206:in `block (2 levels) in define_compile_tasks'
/Volumes/BuildData/tmpfs/altsrc/github/grpc/workspace_ruby_native_gem_macos_darwin/bundle_local_gems/ruby/2.5.0/gems/rake-compiler-1.1.1/lib/rake/extensiontask.rb:203:in `block in define_compile_tasks'
/Volumes/BuildData/tmpfs/altsrc/github/grpc/workspace_ruby_native_gem_macos_darwin/bundle_local_gems/ruby/2.5.0/gems/rake-13.0.6/exe/rake:27:in `'
/Users/kbuilder/.rvm/gems/ruby-2.5.0/bin/bundle:25:in `load'
/Users/kbuilder/.rvm/gems/ruby-2.5.0/bin/bundle:25:in `'
Tasks: TOP => native => native:universal-darwin => native:grpc:universal-darwin => tmp/universal-darwin/stage/src/ruby/lib/grpc/3.0/grpc_c.bundle => copy:grpc_c:universal-darwin:3.0.0 => tmp/universal-darwin/grpc_c/3.0.0/grpc_c.bundle => tmp/universal-darwin/grpc_c/3.0.0/Makefile
(See full trace by running task with --trace)
+ '[' Darwin == Darwin ']'
++ ls 'pkg/*.gem'
++ grep -v darwin
ls: pkg/*.gem: No such file or directory
+ rm
usage: rm [-f | -i] [-dPRrvW] file ...
unlink file
```
https://source.cloud.google.com/results/invocations/a5266702-9573-493b-a946-2f0a779cbd1e/targets/grpc%2Fcore%2Fpull_request%2Fmacos%2Fgrpc_build_artifacts/log",0,ruby macos build flake bin sh bin sh cannot execute binary file compiling src core ext upbdefs generated xds annotations sensitive upbdefs c mkdir p dirname volumes builddata tmpfs altsrc github grpc workspace ruby native gem macos darwin tmp universal darwin grpc c objs opt src core ext upbdefs generated xds annotations sensitive upbdefs o clang fdeclspec ithird party boringssl with bazel src include ithird party address sorting include ithird party cares cares include ithird party cares ithird party cares cares dgpr backwards compatibility mode dgrpc xds user agent name suffix ruby dgrpc xds user agent version suffix dev g wall wextra dosatomic use inlined ithird party abseil cpp ithird party ithird party upb isrc core ext upb generated isrc core ext upbdefs generated ithird party xxhash wframe larger than fpic i iinclude i volumes builddata tmpfs altsrc github grpc workspace ruby native gem macos darwin tmp universal darwin grpc c gens i usr local include dndebug dinstall prefix usr local arch arch ithird party zlib std wextra semi g mmd mf volumes builddata tmpfs altsrc github grpc workspace ruby native gem macos darwin tmp universal darwin grpc c objs opt src core ext upbdefs generated xds annotations sensitive upbdefs dep c o volumes builddata tmpfs altsrc github grpc workspace ruby native gem macos darwin tmp universal darwin grpc c objs opt src core ext upbdefs generated xds annotations sensitive upbdefs o src core ext upbdefs generated xds annotations sensitive upbdefs c compiling src core ext upbdefs generated xds annotations status upbdefs c bin sh bin sh cannot execute binary file mkdir p dirname volumes builddata tmpfs altsrc github grpc workspace ruby native gem macos darwin tmp universal darwin grpc c objs opt src core ext upbdefs generated xds annotations status upbdefs o make error make waiting for unfinished jobs clang fdeclspec ithird party boringssl with bazel src include ithird party address sorting include ithird party cares cares include ithird party cares ithird party cares cares dgpr backwards compatibility mode dgrpc xds user agent name suffix ruby dgrpc xds user agent version suffix dev g wall wextra dosatomic use inlined ithird party abseil cpp ithird party ithird party upb isrc core ext upb generated isrc core ext upbdefs generated ithird party xxhash wframe larger than fpic i iinclude i volumes builddata tmpfs altsrc github grpc workspace ruby native gem macos darwin tmp universal darwin grpc c gens i usr local include dndebug dinstall prefix usr local arch arch ithird party zlib std wextra semi g mmd mf volumes builddata tmpfs altsrc github grpc workspace ruby native gem macos darwin tmp universal darwin grpc c objs opt src core ext upbdefs generated xds annotations status upbdefs dep c o volumes builddata tmpfs altsrc github grpc workspace ruby native gem macos darwin tmp universal darwin grpc c objs opt src core ext upbdefs generated xds annotations status upbdefs o src core ext upbdefs generated xds annotations status upbdefs c src ruby ext grpc extconf rb failed could not create makefile due to some reason probably lack of necessary libraries and or headers check the mkmf log file for more details you may need configuration options provided configuration options with opt dir without opt dir with opt include without opt include opt dir include with opt lib without opt lib opt dir lib with make 
prog without make prog srcdir src ruby ext grpc curdir ruby users kbuilder rake compiler ruby ruby bin ruby base name rake aborted command failed with status volumes builddata tmpfs altsrc github grpc workspace ruby native gem macos darwin bundle local gems ruby gems rake compiler lib rake extensiontask rb in block levels in define compile tasks volumes builddata tmpfs altsrc github grpc workspace ruby native gem macos darwin bundle local gems ruby gems rake compiler lib rake extensiontask rb in block in define compile tasks volumes builddata tmpfs altsrc github grpc workspace ruby native gem macos darwin bundle local gems ruby gems rake exe rake in users kbuilder rvm gems ruby bin bundle in load users kbuilder rvm gems ruby bin bundle in tasks top native native universal darwin native grpc universal darwin tmp universal darwin stage src ruby lib grpc grpc c bundle copy grpc c universal darwin tmp universal darwin grpc c grpc c bundle tmp universal darwin grpc c makefile see full trace by running task with trace ls pkg gem grep v darwin ls pkg gem no such file or directory rm usage rm file unlink file ,0
338,5557520309.0,IssuesEvent,2017-03-24 12:19:50,DevExpress/testcafe,https://api.github.com/repos/DevExpress/testcafe,closed,NavigateTo command should allow to navigate to about:blank page,AREA: client SYSTEM: automations TYPE: enhancement,"### Are you requesting a feature or reporting a bug?
bug
### What is the current behavior?
error raise during protocol checking
### What is the expected behavior?
should navigate to `about:blank` page",1.0,"NavigateTo command should allow to navigate to about:blank page - ### Are you requesting a feature or reporting a bug?
bug
### What is the current behavior?
error raise during protocol checking
### What is the expected behavior?
should navigate to `about:blank` page",1,navigateto command should allow to navigate to about blank page are you requesting a feature or reporting a bug bug what is the current behavior error raise during protocol checking what is the expected behavior should navigate to about blank page,1
9175,27712374403.0,IssuesEvent,2023-03-14 14:58:41,camunda/camunda-bpm-platform,https://api.github.com/repos/camunda/camunda-bpm-platform,closed,Change a job's due date when I set the retries to > 0 again,version:7.19.0 type:feature component:c7-automation-platform,"This issue was imported from JIRA:
| Field | Value |
| ---------------------------------- | ------------------------------------------------------ |
| JIRA Link | [CAM-14601](https://jira.camunda.com/browse/CAM-14601) |
| Reporter | @toco-cam |
| Has restricted visibility comments | true|
___
**Problem**
As an Operations Engineer, I want to ""Increment Number of Retries"" for a process for which the retries are expired. The first of these retries should be executed in a timely manner. Currently, a retry starts with the last timer element defined in the ""retry time cycle"" (technical implementation). If that element is e.g. 1 day (see screenshot), it takes one day until the first retry is executed. With a REST call ""Set due date"" the retry can be triggered instantly. One solution can be to allow ""set due date"" in the ""Increment Number of Retries"" job.
**User Story (Required on creation):**
- (A1) As an operator, I want to be able to choose when the retry starts first for a retry batch job. I want to choose between: Now (Default), Absolute and Legacy. I can look up the details for legacy in the docs.
- (A2) As an operator, I want to be able to choose when the retry starts first for a single job. I want to choose between: Now (Default), Absolute and Legacy. I can look up the details for legacy in the docs.
- (A3) As a developer, I want to be able to define a duedate that overwrites the current due date of the job.

**Functional Requirements (Required before implementation):**
* Allow to ""set due date"" for ""Increment Number of Retries"" batch operations
* If the user does not choose to set a due date, the UI informs them what the default behavior is
* Decide: Should setting a due date also be possible for the non-batch operations for incrementing job retries? (see )
**Breakdown**
Backend
- [x] Add support for due date parameter when incrementing retries for a job or multiple jobs (batch) in Java API. Consider introducing a fluent builder to avoid method duplication. #3060
- [x] #3176
- [x] Add support for due date parameter when incrementing retries for a job or multiple jobs (batch) in REST API #3070
- [x] #3221
Frontend
- [x] #3067
- On the batch operation ""Set retries of Jobs belonging to the process instances"", display a new section:
- Modify the due date
- Display the due date selection form
- In the set retries for single job dialogs, display the due date selection form
Due date selection form:
- ""Absolute"" (radio button): A date picker is shown when this option is selected. The chosen date and time should be used as parameter for the REST call. A question mark icon will display an explanation of the option when hovered over.
- ""Due Date"" (date picker): Only visible when ""Absolute"" is checked. Used to select the date and time which should be used as parameter for the REST call. The current date (now) is preselected.
- ""No change"" (radio button): Selected by default. Do not use the due date parameter for the REST call. A question mark icon will display an explanation of the option when hovered over.
Docs
- [x] REST/Open API docs
- [x] #3167: Add information about this feature to https://docs.camunda.org/manual/latest/user-guide/process-engine/the-job-executor/#retry-time-cycle-configuration
- [x] #3168
- [x] #3194
**Limitations of Scope (Optional):**
**Hints (Optional):**
* See ""set removal time"" batch operation options for setting ""due date""
**Links:**
* is related to https://jira.camunda.com/browse/SUPPORT-13262
",1.0,"Change a job's due date when I set the retries to > 0 again - This issue was imported from JIRA:
| Field | Value |
| ---------------------------------- | ------------------------------------------------------ |
| JIRA Link | [CAM-14601](https://jira.camunda.com/browse/CAM-14601) |
| Reporter | @toco-cam |
| Has restricted visibility comments | true|
___
**Problem**
As an Operations Engineer, I want to ""Increment Number of Retries"" for a process for which the retries are expired. The first of these retries should be executed in a timely manner. Currently, a retry starts with the last timer element defined in the ""retry time cycle"" (technical implementation). If that element is e.g. 1 day (see screenshot), it takes one day until the first retry is executed. With a REST call ""Set due date"" the retry can be triggered instantly. One solution can be to allow ""set due date"" in the ""Increment Number of Retries"" job.
**User Story (Required on creation):**
- (A1) As an operator, I want to be able to choose when the retry starts first for a retry batch job. I want to choose between: Now (Default), Absolute and Legacy. I can look up the details for legacy in the docs.
- (A2) As an operator, I want to be able to choose when the retry starts first for a single job. I want to choose between: Now (Default), Absolute and Legacy. I can look up the details for legacy in the docs.
- (A3) As a developer, I want to be able to define a duedate that overwrites the current due date of the job.

**Functional Requirements (Required before implementation):**
* Allow to ""set due date"" for ""Increment Number of Retries"" batch operations
* If the user does not choose to set a due date, the UI informs them what the default behavior is
* Decide: Should setting a due date also be possible for the non-batch operations for incrementing job retries? (see )
**Breakdown**
Backend
- [x] Add support for due date parameter when incrementing retries for a job or multiple jobs (batch) in Java API. Consider introducing a fluent builder to avoid method duplication. #3060
- [x] #3176
- [x] Add support for due date parameter when incrementing retries for a job or multiple jobs (batch) in REST API #3070
- [x] #3221
Frontend
- [x] #3067
- On the batch operation ""Set retries of Jobs belonging to the process instances"", display a new section:
- Modify the due date
- Display the due date selection form
- In the set retries for single job dialogs, display the due date selection form
Due date selection form:
- ""Absolute"" (radio button): A date picker is shown when this option is selected. The chosen date and time should be used as parameter for the REST call. A question mark icon will display an explanation of the option when hovered over.
- ""Due Date"" (date picker): Only visible when ""Absolute"" is checked. Used to select the date and time which should be used as parameter for the REST call. The current date (now) is preselected.
- ""No change"" (radio button): Selected by default. Do not use the due date parameter for the REST call. A question mark icon will display an explanation of the option when hovered over.
Docs
- [x] REST/Open API docs
- [x] #3167: Add information about this feature to https://docs.camunda.org/manual/latest/user-guide/process-engine/the-job-executor/#retry-time-cycle-configuration
- [x] #3168
- [x] #3194
**Limitations of Scope (Optional):**
**Hints (Optional):**
* See ""set removal time"" batch operation options for setting ""due date""
**Links:**
* is related to https://jira.camunda.com/browse/SUPPORT-13262
",1,change a job s due date when i set the retries to again this issue was imported from jira field value jira link reporter toco cam has restricted visibility comments true problem as operations engineer i want to increment number of retries for a process for which the retries are expired the first of these retries should be executed in a timely manner currently a retry starts with the last timer element defined in the retry time cycle technical implementation if that element is e g day see screenshot it takes one day until the first retry is executed with a rest call set due date the retry can be triggered instantly one solution can be to allow set due date in the increment number of retries job user story required on creation as an operator i want to be able to choose when the retry starts first for a retry batch job i want to choose between now default absolut and legacy i can look up the details for legacy in the docs as an operator i want to be able to choose when the retry starts first for a single job i want to choose between now default absolut and legacy i can look up the details for legacy in the docs as a developer i want to be able to define a duedate that overwrites the current due date of the job image png functional requirements required before implementation allow to set due date for increment number of retries batch operations if the user does not choose to set a due date the ui informs them what the default behavior is decide should setting a due date also be possible for the non batch operations for incrementing job retries see breakdown backend add support for due date parameter when incrementing retries for a job or multiple jobs batch in java api consider introducing a fluent builder to avoid method duplication add support for due date parameter when incrementing retries for a job or multiple jobs batch in rest api frontend on the batch operation set retries of jobs belonging to the process instances display a new section modify the due date display the due date selection form in the set retries for single job dialogs display the due date selection form due date selection form absolute radio button a date picker is shown when this option is selected the chosen date and time should be used as parameter for the rest call a question mark icon will display an explanation of the option when hovered over due date date picker only visible when absolute is checked used to select the date and time which should be used as parameter for the rest call the current date now is preselected no change radio button selected by default do not use the due date parameter for the rest call a question mark icon will display an explanation of the option when hovered over docs rest open api docs add information about this feature to limitations of scope optional hints optional see set removal time batch operation options for setting due date links is related to ,1
20341,10720239589.0,IssuesEvent,2019-10-26 16:16:06,becurrie/titandash,https://api.github.com/repos/becurrie/titandash,closed,Background Click/Function Implementation,enhancement help wanted major performance,"This is potentially a major feature that can be added to the bot.
Ideally... Based on the window selected, we need a way to send out clicks to the window in the background...
This would allow for the following major features:
- Run the bot while doing other things.
- Fully support multiple sessions.
We need to investigate how difficult this would be to implement. We have the HWND of the window, which should be all we need, plus some research into how the Win32 API can be used to accomplish this.
Some of the issues we may run into here:
- Mouse drags (unsure how supported this is).
- Emulator restart is going to cause the `hwnd`to be modified. We'll need a way to get the window again on a restart.",True,"Background Click/Function Implementation - This is potentially a major feature that can be added to the bot.
Ideally... Based on the window selected, we need a way to send out clicks to the window in the background...
This would allow for the following major features:
- Run the bot while doing other things.
- Fully support multiple sessions.
We need to investigate how difficult this would be to implement. We have the HWND of the window, which should be all we need, plus some research into how the Win32 API can be used to accomplish this.
Some of the issues we may run into here:
- Mouse drags (unsure how supported this is).
- Emulator restart is going to cause the `hwnd`to be modified. We'll need a way to get the window again on a restart.",0,background click function implementation this is potentially a major feature that can be added to the bot ideally based on the window selected we need a way to send out clicks to the window in the background this would allow for the following major features run the bot while doing other things fully support multiple sessions we need to investigate how difficult this would be to implement we have the hwnd of the window that should be all we need and some research into how the can be used to accomplish this some of the issue we may run into here would come up with either the mouse drags unsure how supported this is emulator restart is going to cause the hwnd to be modified we ll need a way to get the window again on a restart ,0
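A rough sketch of what the background-click idea above could look like in Python, assuming the pywin32 bindings are available; the helper names, window title, and coordinates are placeholders rather than anything taken from the bot's code. Mouse drags would likely need a sequence of WM_MOUSEMOVE messages between the down and up events, which is one of the open questions the issue raises.
```python
# Minimal sketch (assumes the pywin32 package): send a click to a window by
# handle via PostMessage, without moving the real cursor. The window title and
# coordinates below are placeholders, not values from the bot itself.
import win32api
import win32con
import win32gui

def background_click(hwnd: int, x: int, y: int) -> None:
    """Post a left click at client coordinates (x, y) to the window `hwnd`."""
    lparam = win32api.MAKELONG(x, y)  # pack the client coordinates into LPARAM
    win32gui.PostMessage(hwnd, win32con.WM_LBUTTONDOWN, win32con.MK_LBUTTON, lparam)
    win32gui.PostMessage(hwnd, win32con.WM_LBUTTONUP, 0, lparam)

def find_window(title: str) -> int:
    """Re-resolve the HWND by title, e.g. after an emulator restart changes it."""
    hwnd = win32gui.FindWindow(None, title)
    if not hwnd:
        raise RuntimeError(f"window not found: {title!r}")
    return hwnd

if __name__ == "__main__":
    background_click(find_window("MEmu"), 200, 300)  # "MEmu" is a placeholder title
```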
6539,23379572012.0,IssuesEvent,2022-08-11 08:11:38,home-assistant/core,https://api.github.com/repos/home-assistant/core,closed,Device Automation trigger validation fails if trigger is missing `domain` property,stale integration: device_automation,"### The problem
When creating a trigger, the documentation for device triggers is a bit sparse, it's kind of left up to each integration to provide additional detail.
Unfortunately, the device trigger validation (https://github.com/home-assistant/core/blob/dev/homeassistant/components/device_automation/trigger.py#L65) uses the `domain` property to determine which platform should be used to validate the trigger. If the `domain` property is missing, it will fail with an unhelpful error:
```
homeassistant | 2022-07-03T07:03:00.970072960Z File ""/usr/src/homeassistant/homeassistant/components/device_automation/trigger.py"", line 69, in async_validate_trigger_config
homeassistant | 2022-07-03T07:03:00.970105147Z hass, config[CONF_DOMAIN], DeviceAutomationType.TRIGGER
homeassistant | 2022-07-03T07:03:00.970134730Z KeyError: 'domain'
```
which can be hard to troubleshoot if you don't know that `domain` is required.
Device Automation does define a schema (https://github.com/home-assistant/core/blob/dev/homeassistant/components/device_automation/__init__.py#L49) that defines domain as required, and seems to have an example `TRIGGER_SCHEMA` that extends it, but this doesn't seem to be used anywhere.
### What version of Home Assistant Core has the issue?
2022.6.5
### What was the last working version of Home Assistant Core?
_No response_
### What type of installation are you running?
Home Assistant Container
### Integration causing the issue
Device Automation
### Link to integration documentation on our website
https://www.home-assistant.io/docs/automation/trigger/#device-triggers
### Diagnostics information
_No response_
### Example YAML snippet
_No response_
### Anything in the logs that might be useful for us?
_No response_
### Additional information
I'm happy to provide a fix for this; I'm just interested in first verifying whether it's expected behavior for some reason!",1.0,"Device Automation trigger validation fails if trigger is missing `domain` property - ### The problem
When creating a trigger, the documentation for device triggers is a bit sparse, it's kind of left up to each integration to provide additional detail.
Unfortunately, the device trigger validation (https://github.com/home-assistant/core/blob/dev/homeassistant/components/device_automation/trigger.py#L65) uses the `domain` property to determine which platform should be used to validate the trigger. If the `domain` property is missing, it will fail with an unhelpful error:
```
homeassistant | 2022-07-03T07:03:00.970072960Z File ""/usr/src/homeassistant/homeassistant/components/device_automation/trigger.py"", line 69, in async_validate_trigger_config
homeassistant | 2022-07-03T07:03:00.970105147Z hass, config[CONF_DOMAIN], DeviceAutomationType.TRIGGER
homeassistant | 2022-07-03T07:03:00.970134730Z KeyError: 'domain'
```
which can be hard to troubleshoot if you don't know that `domain` is required.
Device Automation does define a schema (https://github.com/home-assistant/core/blob/dev/homeassistant/components/device_automation/__init__.py#L49) that defines domain as required, and seems to have an example `TRIGGER_SCHEMA` that extends it, but this doesn't seem to be used anywhere.
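As a rough illustration of the requested behavior (not Home Assistant's actual schema or code), a small voluptuous guard could surface a readable validation error instead of the `KeyError`; the fields other than `domain` are assumptions:
```python
# Illustrative only; not Home Assistant's implementation. Validate that a
# device trigger config carries the keys the dispatcher needs before looking
# up the integration platform, so a missing "domain" fails with a clear error.
import voluptuous as vol

DEVICE_TRIGGER_BASE_SCHEMA = vol.Schema(
    {
        vol.Required("platform"): "device",
        vol.Required("domain"): str,      # used to pick the integration platform
        vol.Required("device_id"): str,
    },
    extra=vol.ALLOW_EXTRA,  # integrations add their own trigger fields
)

def validate_trigger(config: dict) -> dict:
    try:
        return DEVICE_TRIGGER_BASE_SCHEMA(config)
    except vol.Invalid as err:
        raise ValueError(f"invalid device trigger: {err}") from err

# Example: this raises a "required key not provided @ data['domain']" error
# validate_trigger({"platform": "device", "device_id": "abc123", "type": "turned_on"})
```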
### What version of Home Assistant Core has the issue?
2022.6.5
### What was the last working version of Home Assistant Core?
_No response_
### What type of installation are you running?
Home Assistant Container
### Integration causing the issue
Device Automation
### Link to integration documentation on our website
https://www.home-assistant.io/docs/automation/trigger/#device-triggers
### Diagnostics information
_No response_
### Example YAML snippet
_No response_
### Anything in the logs that might be useful for us?
_No response_
### Additional information
I'm happy to provide a fix for this, just interested in verifying in it's expected behavior for some reason first!",1,device automation trigger validation fails if trigger is missing domain property the problem when creating a trigger the documentation for device triggers is a bit sparse it s kind of left up to each integration to provide additional detail unfortunately the device trigger validation uses the domain property to determine which platform should be used to validate the trigger if the domain property is missing it will fail with an unhelpful error homeassistant file usr src homeassistant homeassistant components device automation trigger py line in async validate trigger config homeassistant hass config deviceautomationtype trigger homeassistant keyerror domain which can be hard to troubleshoot if you don t know that domain is required device automation does define a schema that defines domain as required and seems to have an example trigger schema that extends it but this doesn t seem to be used anywhere what version of home assistant core has the issue what was the last working version of home assistant core no response what type of installation are you running home assistant container integration causing the issue device automation link to integration documentation on our website diagnostics information no response example yaml snippet no response anything in the logs that might be useful for us no response additional information i m happy to provide a fix for this just interested in verifying in it s expected behavior for some reason first ,1
5895,21578734022.0,IssuesEvent,2022-05-02 16:18:52,rancher-sandbox/rancher-desktop,https://api.github.com/repos/rancher-sandbox/rancher-desktop,opened,Incorrect error message when `rdctl start` is run while a session is already running,kind/bug area/automation,"If you run `rdctl start` (may be by mistake), when you have a session running already, the command prints below message which is not correct.
```
rdctl start
Error: set command: no settings to change were given
Usage:
rdctl start [flags]
Flags:
--container-engine string Set engine to containerd or moby (aka docker).
--flannel-enabled Control whether flannel is enabled. Use to disable flannel so you can install your own CNI. (default true)
-h, --help help for start
--kubernetes-enabled Control whether kubernetes runs in the backend.
--kubernetes-version string Choose which version of kubernetes to run.
-p, --path string Path to main executable.
Global Flags:
--config-path string config file (default C:\Users\GunasekharMatamalam\AppData\Roaming\rancher-desktop\rd-engine.json)
--host string default is localhost; most useful for WSL
--password string overrides the password setting in the config file
--port string overrides the port setting in the config file
--user string overrides the user setting in the config file
```",1.0,"Incorrect error message when `rdctl start` is run while a session is already running - If you run `rdctl start` (may be by mistake), when you have a session running already, the command prints below message which is not correct.
```
rdctl start
Error: set command: no settings to change were given
Usage:
rdctl start [flags]
Flags:
--container-engine string Set engine to containerd or moby (aka docker).
--flannel-enabled Control whether flannel is enabled. Use to disable flannel so you can install your own CNI. (default true)
-h, --help help for start
--kubernetes-enabled Control whether kubernetes runs in the backend.
--kubernetes-version string Choose which version of kubernetes to run.
-p, --path string Path to main executable.
Global Flags:
--config-path string config file (default C:\Users\GunasekharMatamalam\AppData\Roaming\rancher-desktop\rd-engine.json)
--host string default is localhost; most useful for WSL
--password string overrides the password setting in the config file
--port string overrides the port setting in the config file
--user string overrides the user setting in the config file
```",1,incorrect error message when rdctl start is run while a session is already running if you run rdctl start may be by mistake when you have a session running already the command prints below message which is not correct rdctl start error set command no settings to change were given usage rdctl start flags container engine string set engine to containerd or moby aka docker flannel enabled control whether flannel is enabled use to disable flannel so you can install your own cni default true h help help for start kubernetes enabled control whether kubernetes runs in the backend kubernetes version string choose which version of kubernetes to run p path string path to main executable global flags config path string config file default c users gunasekharmatamalam appdata roaming rancher desktop rd engine json host string default is localhost most useful for wsl password string overrides the password setting in the config file port string overrides the port setting in the config file user string overrides the user setting in the config file ,1
16027,11802080215.0,IssuesEvent,2020-03-18 20:47:55,spring-projects/spring-batch,https://api.github.com/repos/spring-projects/spring-batch,closed,AbstractCursorItemReader doClose() method is not reentrant [BATCH-2737],has: backports in: infrastructure type: bug,"**[Tommy](https://jira.spring.io/secure/ViewProfile.jspa?name=tommy)** opened **[BATCH-2737](https://jira.spring.io/browse/BATCH-2737?redirect=false)** and commented
The following warning coming up from the `DisposableBeanAdapter`, when it tries to destroy any reader extended from the `AbstractCursorItemReader` by the auto-discovered `close()` method.
`DisposableBeanAdapter : Invocation of destroy method 'close' failed on bean with name 'reader': org.springframework.batch.item.ItemStreamException: Error while closing item reader`
Since the invocation of the `close()` method is already part of the Spring-Batch life-cycle, the `doClose()` method of this class should be reentrant.
The problem lies in the incomplete check around resetting the `autoCommit` state of the underlying connection, which does not account for an already-closed connection.
The check should look something like this:
```java
if (this.con != null && !this.con.isClosed()) {
this.con.setAutoCommit(this.initialConnectionAutoCommit);
}
```
---
**Affects:** 4.0.0
1 votes, 2 watchers
",1.0,"AbstractCursorItemReader doClose() method is not reentrant [BATCH-2737] - **[Tommy](https://jira.spring.io/secure/ViewProfile.jspa?name=tommy)** opened **[BATCH-2737](https://jira.spring.io/browse/BATCH-2737?redirect=false)** and commented
The following warning coming up from the `DisposableBeanAdapter`, when it tries to destroy any reader extended from the `AbstractCursorItemReader` by the auto-discovered `close()` method.
`DisposableBeanAdapter : Invocation of destroy method 'close' failed on bean with name 'reader': org.springframework.batch.item.ItemStreamException: Error while closing item reader`
Since the invocation of the `close()` method is already part of the Spring-Batch life-cycle, the `doClose()` method of this class should be reentrant.
The problem lies in the incomplete check around resetting the `autoCommit` state of the underlying connection, which does not account for an already-closed connection.
The check should look something like this:
```java
if (this.con != null && !this.con.isClosed()) {
this.con.setAutoCommit(this.initialConnectionAutoCommit);
}
```
---
**Affects:** 4.0.0
1 votes, 2 watchers
",0,abstractcursoritemreader doclose method is not reentrant opened and commented the following warning coming up from the disposablebeanadapter when it tries to destroy any reader extended from the abstractcursoritemreader by the auto discovered close method disposablebeanadapter invocation of destroy method close failed on bean with name reader org springframework batch item itemstreamexception error while closing item reader since the invocation of the close method is already part of the spring batch life cycle the doclose method of this class should be reentrant the problem lies in the incomplete check around resetting the autocommit state of the underlying connection which does not respect the already closed connection the check should look like something similar java if this con null this conn isclosed this con setautocommit this initialconnectionautocommit affects votes watchers ,0
340764,30541302326.0,IssuesEvent,2023-07-19 21:41:07,gotsiridzes/mit-08-final,https://api.github.com/repos/gotsiridzes/mit-08-final,opened,bf4d790 failed unit and formatting tests.,ci-black ci-pytest,"CI failed on commit: bf4d790f8ecd9cbd2d6e0637a3eec3ad0142279a
**Author:** tian.zhang@triflesoft.org
**Pytest Report:** https://gotsiridzes.github.io/mit-08-final-report/bf4d790f8ecd9cbd2d6e0637a3eec3ad0142279a-1689802563/pytest_report.html
First commit that introduced pytest's failure: a3c625c52821a22b3ca0179c19b90abdfddbd5f1
**Black Report:** https://gotsiridzes.github.io/mit-08-final-report/bf4d790f8ecd9cbd2d6e0637a3eec3ad0142279a-1689802563/black_report.html
First commit that introduced black's failure: a3c625c52821a22b3ca0179c19b90abdfddbd5f1
",1.0,"bf4d790 failed unit and formatting tests. - CI failed on commit: bf4d790f8ecd9cbd2d6e0637a3eec3ad0142279a
**Author:** tian.zhang@triflesoft.org
**Pytest Report:** https://gotsiridzes.github.io/mit-08-final-report/bf4d790f8ecd9cbd2d6e0637a3eec3ad0142279a-1689802563/pytest_report.html
First commit that introduced pytest's failure: a3c625c52821a22b3ca0179c19b90abdfddbd5f1
**Black Report:** https://gotsiridzes.github.io/mit-08-final-report/bf4d790f8ecd9cbd2d6e0637a3eec3ad0142279a-1689802563/black_report.html
First commit that introduced black's failure: a3c625c52821a22b3ca0179c19b90abdfddbd5f1
",0, failed unit and formatting tests ci failed on commit author tian zhang triflesoft org pytest report first commit that introduced pytest s failure black report first commit that introduced black s failure ,0
300632,25982669104.0,IssuesEvent,2022-12-19 20:22:16,department-of-veterans-affairs/va.gov-cms,https://api.github.com/repos/department-of-veterans-affairs/va.gov-cms,opened,Scrape module release page for info and post in a comment on Dependabot PRs.,Automated testing ⭐️ Sitewide CMS Quality Assurance,"## Description
I've done this once and I'll do it again.
## Acceptance Criteria
- [ ] Testable_Outcome_X
- [ ] Testable_Outcome_Y
- [ ] Testable_Outcome_Z
- [ ] Requires design review
",1.0,"Scrape module release page for info and post in a comment on Dependabot PRs. - ## Description
I've done this once and I'll do it again.
## Acceptance Criteria
- [ ] Testable_Outcome_X
- [ ] Testable_Outcome_Y
- [ ] Testable_Outcome_Z
- [ ] Requires design review
",0,scrape module release page for info and post in a comment on dependabot prs description i ve done this once and i ll do it again acceptance criteria testable outcome x testable outcome y testable outcome z requires design review ,0
1584,10361005168.0,IssuesEvent,2019-09-06 09:01:19,elastic/metricbeat-tests-poc,https://api.github.com/repos/elastic/metricbeat-tests-poc,opened,Represent the state of the running services in a standard file,automation,"We will help teams to understand what is running and where
Document how to read services in each language",1.0,"Represent the state of the running services in a standard file - We will help teams to understand what is running and where
Document how to read services on each language",1,represent the state of the running services in a standard file we will help teams to understand what is running and where document how to read services on each language,1
1370,9991308380.0,IssuesEvent,2019-07-11 10:47:08,mozilla-mobile/android-components,https://api.github.com/repos/mozilla-mobile/android-components,closed,Unexpected failure during lint analysis of module-info.class,🤖 automation,"Lint for `support-test` is failing in the 0.38.0 release task:
https://tools.taskcluster.net/groups/UJ-v477WQZq5QY2gtNQ7Xw/tasks/U5fFFP4nTAiHckdK_fq8-Q/runs/3/logs/public%2Flogs%2Flive.log
We re-run the task multiple times but it always fails with that error. The same commit has passed on master without any issues.",1.0,"Unexpected failure during lint analysis of module-info.class - Lint for `support-test` is failing in the 0.38.0 release task:
https://tools.taskcluster.net/groups/UJ-v477WQZq5QY2gtNQ7Xw/tasks/U5fFFP4nTAiHckdK_fq8-Q/runs/3/logs/public%2Flogs%2Flive.log
We re-run the task multiple times but it always fails with that error. The same commit has passed on master without any issues.",1,unexpected failure during lint analysis of module info class lint for support test is failing in the release task we re run the task multiple times but it always fails with that error the same commit has passed on master without any issues ,1
139501,5377000094.0,IssuesEvent,2017-02-23 10:45:51,datproject/dat-desktop,https://api.github.com/repos/datproject/dat-desktop,closed,generate brew install script,Priority: Low Status: Proposal Type: Enhancement,"stumbled on [pup](https://github.com/ericchiang/pup) which has a clever trick to do homebrew installs:
```sh
brew install https://raw.githubusercontent.com/EricChiang/pup/master/pup.rb
```
resolves to:
```rb
# This file was generated by release.sh
require 'formula'
class Pup < Formula
  homepage 'https://github.com/ericchiang/pup'
  version '0.4.0'
  if Hardware.is_64_bit?
    url 'https://github.com/ericchiang/pup/releases/download/v0.4.0/pup_v0.4.0_darwin_amd64.zip'
    sha256 'c539a697efee2f8e56614a54cb3b215338e00de1f6a7c2fa93144ab6e1db8ebe'
  else
    url 'https://github.com/ericchiang/pup/releases/download/v0.4.0/pup_v0.4.0_darwin_386.zip'
    sha256 '75c27caa0008a9cc639beb7506077ad9f32facbffcc4e815e999eaf9588a527e'
  end
  def install
    bin.install 'pup'
  end
end
```
We can use the same trick for `dat-desktop`. I think this is pretty cool; perhaps we could even leverage it to bundle the `dat` command, for all those folks who for whatever reason can't pull Node onto their system first.",1.0,"generate brew install script - stumbled on [pup](https://github.com/ericchiang/pup) which has a clever trick to do homebrew installs:
```sh
brew install https://raw.githubusercontent.com/EricChiang/pup/master/pup.rb
```
resolves to:
```rb
# This file was generated by release.sh
require 'formula'
class Pup < Formula
  homepage 'https://github.com/ericchiang/pup'
  version '0.4.0'
  if Hardware.is_64_bit?
    url 'https://github.com/ericchiang/pup/releases/download/v0.4.0/pup_v0.4.0_darwin_amd64.zip'
    sha256 'c539a697efee2f8e56614a54cb3b215338e00de1f6a7c2fa93144ab6e1db8ebe'
  else
    url 'https://github.com/ericchiang/pup/releases/download/v0.4.0/pup_v0.4.0_darwin_386.zip'
    sha256 '75c27caa0008a9cc639beb7506077ad9f32facbffcc4e815e999eaf9588a527e'
  end
  def install
    bin.install 'pup'
  end
end
```
Which in turn we can use for `dat-desktop`. Think this is pretty cool; perhaps we could even leverage this to bundle the `dat` command. For all those folks that for whatever reason can't pull node onto their system first",0,generate brew install script stumbled on which has a clever trick to do homebrew installs sh brew install resolves to rb this file was generated by release sh require formula class pup formula homepage version if hardware is bit url else url end def install bin install pup end end which in turn we can use for dat desktop think this is pretty cool perhaps we could even leverage this to bundle the dat command for all those folks that for whatever reason can t pull node onto their system first,0
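A rough sketch of how a release script might generate such a formula for dat-desktop — purely hypothetical, not the project's actual tooling; the class name, homepage, and download URL are placeholders:
```python
# Hypothetical release helper: compute the sha256 of a packaged release and
# render a Homebrew formula from a template, similar to what pup's release.sh
# appears to do. None of the URLs below are real release artifacts.
import hashlib
import textwrap

def sha256_of(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def render_formula(version: str, url: str, zip_path: str) -> str:
    return textwrap.dedent(f"""\
        # This file was generated by a release script
        require 'formula'
        class DatDesktop < Formula
          homepage 'https://example.com/dat-desktop'  # placeholder
          version '{version}'
          url '{url}'
          sha256 '{sha256_of(zip_path)}'
        end
    """)

# print(render_formula("1.0.0", "https://example.com/dat-desktop-1.0.0-darwin.zip",
#                      "dist/dat-desktop-1.0.0-darwin.zip"))
```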
7878,19761013736.0,IssuesEvent,2022-01-16 12:17:30,graphhopper/graphhopper,https://api.github.com/repos/graphhopper/graphhopper,closed,Check if moving TestAlgoCollector into test package of osm module is possible,improvement architecture,Then we could also use the TranslationMapTest.SINGLETON instead of the newly created instance.,1.0,Check if moving TestAlgoCollector into test package of osm module is possible - Then we could also use the TranslationMapTest.SINGLETON instead of the newly created instance.,0,check if moving testalgocollector into test package of osm module is possible then we could also use the translationmaptest singleton instead of the newly created instance ,0
2594,12323523035.0,IssuesEvent,2020-05-13 12:19:35,coolOrangeLabs/powerGateTemplate,https://api.github.com/repos/coolOrangeLabs/powerGateTemplate,closed,Merge powerGate Template with other customizations,Automation,"## Questions: Existing other customizations
+ What other customizations are present on the environment of the customer?
+ Reseller customizations? What kind of?
+ Other 3rd party tools?
## coolOrange Tasks
If there are Data standard customizations then we need to accomplish the tasks below.
### Inventor
+ [ ] Get the following customized files from the customer:
+ [ ] `Inventor.xaml`
+ [ ] Merge the customized `Inventor.xaml` with the Inventor.xaml from the powerGate Template
### Vault
+ [ ] Get the following customized files from the customer:
+ [ ] `Default.ps1` where the powershell function `OnTabContextChanged` is overriden
+ [ ] Merge the customized `OnTabContextChanged` with the code from the powerGate Template",1.0,"Merge powerGate Template with other customizations - ## Questions: Existing other customizations
+ What other customizations are present on the environment of the customer?
+ Reseller customizations? What kind of?
+ Other 3rd party tools?
## coolOrange Tasks
If there are Data standard customizations then we need to accomplish the tasks below.
### Inventor
+ [ ] Get the following customized files from the customer:
+ [ ] `Inventor.xaml`
+ [ ] Merge the customized `Inventor.xaml` with the Inventor.xaml from the powerGate Template
### Vault
+ [ ] Get the following customized files from the customer:
+ [ ] `Default.ps1` where the powershell function `OnTabContextChanged` is overriden
+ [ ] Merge the customized `OnTabContextChanged` with the code from the powerGate Template",1,merge powergate teamplate with other customizations questions existing other customizations what other customizations are present on the environment of the customer reseller customizations what kind of other party tools coolorange tasks if there are data standard customizations then we need to accomplish the tasks below inventor get the following customized files from the customer inventor xaml merge the customized inventor xaml with the inventor xaml from the powergate template vault get the following customized files from the customer default where the powershell function ontabcontextchanged is overriden merge the customized ontabcontextchanged with the code from the powergate template,1
79412,28182535580.0,IssuesEvent,2023-04-04 04:34:37,apache/jmeter,https://api.github.com/repos/apache/jmeter,opened,want to run multiple user sequentially,defect to-triage,"### Expected behavior
I have 800 users and I want the next request to be sent only after the prior request has completed.
### Actual behavior
they all run at the same time
### Steps to reproduce the problem
create a multiple thread
add http request
csv file
add listener
### JMeter Version
5.5
### Java Version
_No response_
### OS Version
_No response_",1.0,"want to run multiple user sequentially - ### Expected behavior
I have 800 users and I want the next request to be sent only after the prior request has completed.
### Actual behavior
they all run at the same time
### Steps to reproduce the problem
create a multiple thread
add http request
csv file
add listener
### JMeter Version
5.5
### Java Version
_No response_
### OS Version
_No response_",0,want to run multiple user sequentially expected behavior i have user and i want to run that the next request is sent only after the prior request is completed actual behavior they run with same timing steps to reproduce the problem create a multiple thread add http request csv file add listner jmeter version java version no response os version no response ,0
342711,24754456202.0,IssuesEvent,2022-10-21 16:19:59,department-of-veterans-affairs/va.gov-team,https://api.github.com/repos/department-of-veterans-affairs/va.gov-team,closed,[Application Hosting and Deployment] Operational documentation,Epic operations documentation infrastructure eks,"## Product Outline
[Application Hosting and Deployment using Container Orchestration (EKS)](https://vfs.atlassian.net/wiki/spaces/OT/pages/1474593866/Application+Hosting+and+Deployment+using+Container+Orchestration+EKS)
## High-Level User Story/ies
As an operator of the VA.Gov Platform, I need to understand the tools and processes involved in supporting the platform's application hosting and deployment system.
## Hypothesis or Bet
If we provide accurate documentation, operators will know how to manage the application hosting and deployment system.
## Definition of done
### What must be true in order for you to consider this epic complete?
There are diagrams that depict the following...
- Cluster topology
- worker nodes + auto-scaling group
- subnets / CNI
- AWS resources that together make up the cluster
- Service topology (1 for deployment cluster, 1 for tooling/utility cluster)
- Traefik / Ingress
- Datadog agents
- Cert manager
- External DNS
- External Secrets
- Metrics server
- Cluster auto-scaler
- Automation flow throughout the platform
There is operational documentation that explains how...
- to troubleshoot application deployments
- to do necessary maintenance on the ArgoCD, EKS, etc.
- build
- deploy
- upgrade
- teardown",1.0,"[Application Hosting and Deployment] Operational documentation - ## Product Outline
[Application Hosting and Deployment using Container Orchestration (EKS)](https://vfs.atlassian.net/wiki/spaces/OT/pages/1474593866/Application+Hosting+and+Deployment+using+Container+Orchestration+EKS)
## High-Level User Story/ies
As an operator of the VA.Gov Platform, I need to understand the tools and processes involved in supporting the platform's application hosting and deployment system.
## Hypothesis or Bet
If we provide accurate documentation, operators will know how to manage the application hosting and deployment system.
## Definition of done
### What must be true in order for you to consider this epic complete?
There are diagrams that depict the following...
- Cluster topology
- worker nodes + auto-scaling group
- subnets / CNI
- AWS resources that together make up the cluster
- Service topology (1 for deployment cluster, 1 for tooling/utility cluster)
- Traefik / Ingress
- Datadog agents
- Cert manager
- External DNS
- External Secrets
- Metrics server
- Cluster auto-scaler
- Automation flow throughout the platform
There is operational documentation that explains how...
- to troubleshoot application deployments
- to do necessary maintenance on the ArgoCD, EKS, etc.
- build
- deploy
- upgrade
- teardown",0, operational documentation product outline high level user story ies as an operator of the va gov platform i need to understand the tools and processes involved in supporting the platform s application hosting and deployment system hypothesis or bet if we provide accurate documentation operators will know how to manage the application hosting and deployment system definition of done what must be true in order for you to consider this epic complete there are diagrams that depict the following cluster topology worker nodes auto scaling group subnets cni aws resources that together make up the cluster service topology for deployment cluster for tooling utility cluster traefik ingress datadog agents cert manager external dns external secrets metrics server cluster auto scaler automation flow throughout the platform there is operational documentation that explains how to troubleshoot application deployments to do necessary maintenance on the argocd eks etc build deploy upgrade teardown,0
590626,17782960923.0,IssuesEvent,2021-08-31 07:41:40,teamforus/general,https://api.github.com/repos/teamforus/general,closed,Geertruidenberg footer has WCAG mistakes,Priority: Must have Urgency: Medium Client: Geertruidenberg,"Learn more about change requests here: https://bit.ly/39CWeEE
### Requested by:
-
### Change description

wcag.nl/quickscan/aNoyLjxtMkYkdxiMvR4K",1.0,"Geertruidenberg footer has WCAG mistakes - Learn more about change requests here: https://bit.ly/39CWeEE
### Requested by:
-
### Change description

wcag.nl/quickscan/aNoyLjxtMkYkdxiMvR4K",0,geertruidenberg footer has wcag mistakes learn more about change requests here requested by change description wcag nl quickscan ,0
6181,22366462828.0,IssuesEvent,2022-06-16 04:58:59,harvester/harvester,https://api.github.com/repos/harvester/harvester,closed,[FEATURE] Add Harvester backport issue bot,enhancement priority/2 area/automation,"**Is your feature request related to a problem? Please describe.**
Add a Harvester bot to auto-create backport issues based on the backport label.
**Describe the solution you'd like**
- Title: [Backport v1.x] copy-the-title.
- Description: backport the issue #link-id
- Copy assignees and all labels except the `backport-needed` and add the `not-require/test-plan` label.
- Move the issue to the associated milestone and release.
**Describe alternatives you've considered**
**Additional context**
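A rough sketch of the core step such a bot could perform, using PyGithub; this is illustrative only — the repository name, token handling, and the milestone/release move (omitted here) are assumptions, not the actual Harvester bot:
```python
# Illustrative sketch, not the real Harvester bot. Creates a backport issue
# that copies the title, assignees, and labels (minus backport-needed*) and
# adds not-require/test-plan; moving it to a milestone/release is left out.
import os
from github import Github  # PyGithub

def create_backport_issue(repo_full_name: str, source_number: int, target: str = "v1.x"):
    gh = Github(os.environ["GITHUB_TOKEN"])
    repo = gh.get_repo(repo_full_name)
    src = repo.get_issue(source_number)

    labels = [label.name for label in src.labels if not label.name.startswith("backport-needed")]
    labels.append("not-require/test-plan")

    return repo.create_issue(
        title=f"[Backport {target}] {src.title}",
        body=f"backport the issue #{source_number}",
        labels=labels,
        assignees=[user.login for user in src.assignees],
    )

# Example (hypothetical): create_backport_issue("harvester/harvester", 1234, "v1.0")
```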
",1.0,"[FEATURE] Add Harvester backport issue bot - **Is your feature request related to a problem? Please describe.**
Add a Harvester bot to auto-create backport issues based on the backport label.
**Describe the solution you'd like**
- Title: [Backport v1.x] copy-the-title.
- Description: backport the issue #link-id
- Copy assignees and all labels except the `backport-needed` and add the `not-require/test-plan` label.
- Move the issue to the associated milestone and release.
**Describe alternatives you've considered**
**Additional context**
",1, add harvester backport issue bot is your feature request related to a problem please describe add harvester bot to auto crate backport issues based on the backport label describe the solution you d like title copy the title description backport the issue link id copy assignees and all labels except the backport needed and add the not require test plan label move the issue to the associated milestone and release describe alternatives you ve considered additional context ,1
199,4567290480.0,IssuesEvent,2016-09-15 10:31:56,MISP/MISP,https://api.github.com/repos/MISP/MISP,closed,REST API - Get openIOC output,automation import/export,"Might be needed for integration with [openioc_scan (volatility plugin)](https://github.com/TakahiroHaruyama/openioc_scan), see https://github.com/TakahiroHaruyama/openioc_scan/issues/2
Both for individual events, and a global one.",1.0,"REST API - Get openIOC output - Might be needed for integration with [openioc_scan (volatility plugin)](https://github.com/TakahiroHaruyama/openioc_scan), see https://github.com/TakahiroHaruyama/openioc_scan/issues/2
Both for individual events, and a global one.",1,rest api get openioc output might be needed for integration with see both for individual events and a global one ,1
51090,13188098770.0,IssuesEvent,2020-08-13 05:33:08,icecube-trac/tix3,https://api.github.com/repos/icecube-trac/tix3,closed,[MuonGun] Surfaces refactor broke deserialization of pre-IceSim5 S frames (Trac #1956),Migrated from Trac combo core defect,"Trying to deserialize an S frame written with IceSim 4 with current software fails with
```text
FATAL (phys-services): Version 117 is from the future (SamplingSurface.cxx:50 in void I3Surfaces::SamplingSurface::serialize(Archive&, unsigned int) [with Archive = icecube::archive::portable_binary_iarchive])
```
This is probably because the refactor added a new layer in the inheritance tree, the current code tries to read a class ID and version from the stream that are not there. While empty base classes do not take up space in memory, they turn out to matter quite a bit for serialization.
Migrated from https://code.icecube.wisc.edu/ticket/1956, reported by jvansanten and owned by
```json
{
""status"": ""closed"",
""changetime"": ""2017-03-14T20:23:10"",
""description"": ""Trying to deserialize an S frame written with IceSim 4 with current software fails with \n{{{\nFATAL (phys-services): Version 117 is from the future (SamplingSurface.cxx:50 in void I3Surfaces::SamplingSurface::serialize(Archive&, unsigned int) [with Archive = icecube::archive::portable_binary_iarchive])\n}}}\n\nThis is probably because the refactor added a new layer in the inheritance tree, the current code tries to read a class ID and version from the stream that are not there. While empty base classes do not take up space in memory, they turn out to matter quite a bit for serialization."",
""reporter"": ""jvansanten"",
""cc"": """",
""resolution"": ""invalid"",
""_ts"": ""1489522990898099"",
""component"": ""combo core"",
""summary"": ""[MuonGun] Surfaces refactor broke deserialization of pre-IceSim5 S frames"",
""priority"": ""critical"",
""keywords"": """",
""time"": ""2017-03-14T15:16:30"",
""milestone"": """",
""owner"": """",
""type"": ""defect""
}
```
",1.0,"[MuonGun] Surfaces refactor broke deserialization of pre-IceSim5 S frames (Trac #1956) - Trying to deserialize an S frame written with IceSim 4 with current software fails with
```text
FATAL (phys-services): Version 117 is from the future (SamplingSurface.cxx:50 in void I3Surfaces::SamplingSurface::serialize(Archive&, unsigned int) [with Archive = icecube::archive::portable_binary_iarchive])
```
This is probably because the refactor added a new layer in the inheritance tree, the current code tries to read a class ID and version from the stream that are not there. While empty base classes do not take up space in memory, they turn out to matter quite a bit for serialization.
Migrated from https://code.icecube.wisc.edu/ticket/1956, reported by jvansanten and owned by
```json
{
""status"": ""closed"",
""changetime"": ""2017-03-14T20:23:10"",
""description"": ""Trying to deserialize an S frame written with IceSim 4 with current software fails with \n{{{\nFATAL (phys-services): Version 117 is from the future (SamplingSurface.cxx:50 in void I3Surfaces::SamplingSurface::serialize(Archive&, unsigned int) [with Archive = icecube::archive::portable_binary_iarchive])\n}}}\n\nThis is probably because the refactor added a new layer in the inheritance tree, the current code tries to read a class ID and version from the stream that are not there. While empty base classes do not take up space in memory, they turn out to matter quite a bit for serialization."",
""reporter"": ""jvansanten"",
""cc"": """",
""resolution"": ""invalid"",
""_ts"": ""1489522990898099"",
""component"": ""combo core"",
""summary"": ""[MuonGun] Surfaces refactor broke deserialization of pre-IceSim5 S frames"",
""priority"": ""critical"",
""keywords"": """",
""time"": ""2017-03-14T15:16:30"",
""milestone"": """",
""owner"": """",
""type"": ""defect""
}
```
",0, surfaces refactor broke deserialization of pre s frames trac trying to deserialize an s frame written with icesim with current software fails with text fatal phys services version is from the future samplingsurface cxx in void samplingsurface serialize archive unsigned int this is probably because the refactor added a new layer in the inheritance tree the current code tries to read a class id and version from the stream that are not there while empty base classes do not take up space in memory they turn out to matter quite a bit for serialization migrated from json status closed changetime description trying to deserialize an s frame written with icesim with current software fails with n nfatal phys services version is from the future samplingsurface cxx in void samplingsurface serialize archive unsigned int n n nthis is probably because the refactor added a new layer in the inheritance tree the current code tries to read a class id and version from the stream that are not there while empty base classes do not take up space in memory they turn out to matter quite a bit for serialization reporter jvansanten cc resolution invalid ts component combo core summary surfaces refactor broke deserialization of pre s frames priority critical keywords time milestone owner type defect ,0
20295,29517890346.0,IssuesEvent,2023-06-04 18:00:06,SodiumZH/Days-with-Monster-Girls,https://api.github.com/repos/SodiumZH/Days-with-Monster-Girls,closed,Mod does not allow you to use Quantum Catcher from Forbidden and Arcanus,compatibility,"
Every time I try to capture the tamed mob to bring it somewhere else, all I get is an armor placement screen opening up, or just the text saying the mob is following or staying. There needs to be a dedicated tool for making the tamed monster follow or stay, rather than the empty hand, as the current behavior interferes with other mods.",True,"Mod does not allow you to use Quantum Catcher from Forbidden and Arcanus -
Every time I try to capture the tamed mob to bring somewhere else all I get is a armor placement opening up, or the mob is following or staying text, there needs to be a tool assigned to causing the tamed monster to follow or not, not the hand, as it interferes with other mods",0,mod does now allow you to use quantum catcher from forbidden and arcanus img width alt image src every time i try to capture the tamed mob to bring somewhere else all i get is a armor placement opening up or the mob is following or staying text there needs to be a tool assigned to causing the tamed monster to follow or not not the hand as it interferes with other mods,0
220320,16920598199.0,IssuesEvent,2021-06-25 04:41:33,old-rookies/tech-demo-client,https://api.github.com/repos/old-rookies/tech-demo-client,opened,Events inside a component,documentation,"https://github.com/old-rookies/tech-demo-client/blob/825a16b6979b8ba3a092954dce37434ead63ffa6/src/scenes/Games/GameScene/index.tsx#L37
If you use addEventListener inside a component, a new event listener can be registered every time the component re-renders.
For example, if this file ends up holding state, the old handler is not destroyed and a new event handler gets registered on top of it,
so a single event could end up being handled twice. Also, even when the scene changes, the window object may not be reloaded, so the listener can stay registered: the scene is gone, but its event listener is still hanging around inside the object.
So if you do have to register an event inside a component,
```tsx
const evtFn = ()=>console.log('some event fired');
useEffect(()=>{
window.addEventListener('EVENT_NAME' ,evtFn);
return ()=>{
window.removeEventListener('EVENT_NAME' , evtFn);
}
},[]);
```
registering it like the snippet above means that even when the component re-renders, or data is refreshed and the page re-renders,
you avoid the event firing multiple times or a stale listener being left behind.
Just saying. ",1.0,"Events inside a component - https://github.com/old-rookies/tech-demo-client/blob/825a16b6979b8ba3a092954dce37434ead63ffa6/src/scenes/Games/GameScene/index.tsx#L37
If you use addEventListener inside a component, a new event listener can be registered every time the component re-renders.
For example, if this file ends up holding state, the old handler is not destroyed and a new event handler gets registered on top of it,
so a single event could end up being handled twice. Also, even when the scene changes, the window object may not be reloaded, so the listener can stay registered: the scene is gone, but its event listener is still hanging around inside the object.
So if you do have to register an event inside a component,
```tsx
const evtFn = ()=>console.log('some event fired');
useEffect(()=>{
window.addEventListener('EVENT_NAME' ,evtFn);
return ()=>{
window.removeEventListener('EVENT_NAME' , evtFn);
}
},[]);
```
registering it like the snippet above means that even when the component re-renders, or data is refreshed and the page re-renders,
you avoid the event firing multiple times or a stale listener being left behind.
그냥 그렇다굽쇼 ",0,컴포넌트 안의 이벤트 컴포넌트 안에서 addeventlistener를 사용하게 되면 컴포넌트가 재 랜더링 될때마다 새로 이벤트 리스닝이 등록될 수 있습니당 예를들어 만약 해당 파일이 state를 갖게된다면 event가 destroy되지않고 새로 이벤트 함수가 등록되게 되어서 한 이벤트에 대해 처리가 일어나는 경우가 생길 수도있는것이져 그리고 씬이 변경되더라도 window 객체가 새로 reload 되지 않을 수 있기때문에 이벤트 리스너는 그대로 등록되어 있을수도 있습니당 씬은 없는데 그 이벤트 리스너는 객체 안에 남아있을 수도 있는 것이져 그렇기 때문에 만약에 이벤트를 컴포넌트 안에서 등록해야한다면 tsx const evtfn console log some event fired useeffect window addeventlistener event name evtfn return window removeeventlistener event name evtfn 아래와 같이 해주어야 새로 랜더가 되거나 데이터가 갱신되어 페이지가 re render되더라도 이벤트가 중첩 실행 혹은 남아있는 경우를 피할 수 있습니당 그냥 그렇다굽쇼 ,0
9839,30621318059.0,IssuesEvent,2023-07-24 08:28:26,zaproxy/zaproxy,https://api.github.com/repos/zaproxy/zaproxy,closed,maxAlertsPerRule for activeScan job,enhancement add-on in:automation,"### Is your feature request related to a problem? Please describe.
Because you can not set maxAlertsPerRule in activeScan, the reports are very bloated.
### Describe the solution you'd like
I would like to have a possibilty to configure maxAlertsPerRule also for the activeScan like it is possible for passiveScan
### Describe alternatives you've considered
unfortunately there is no alternative
### Screenshots
_No response_
### Additional context
_No response_
### Would you like to help fix this issue?
- [ ] Yes",1.0,"maxAlertsPerRule for activeScan job - ### Is your feature request related to a problem? Please describe.
Because you can not set maxAlertsPerRule in activeScan, the reports are very bloated.
### Describe the solution you'd like
I would like to have a possibilty to configure maxAlertsPerRule also for the activeScan like it is possible for passiveScan
### Describe alternatives you've considered
unfortunately there is no alternative
### Screenshots
_No response_
### Additional context
_No response_
### Would you like to help fix this issue?
- [ ] Yes",1,maxalertsperrule for activescan job is your feature request related to a problem please describe because you can not set maxalertsperrule in activescan the reports are very bloated describe the solution you d like i would like to have a possibilty to configure maxalertsperrule also for the activescan like it is possible for passivescan describe alternatives you ve considered unfortunately there is no alternative screenshots no response additional context no response would you like to help fix this issue yes,1
8761,27172219204.0,IssuesEvent,2023-02-17 20:33:54,OneDrive/onedrive-api-docs,https://api.github.com/repos/OneDrive/onedrive-api-docs,closed,What is the best way to specify fileSystemInfo.createdDateTime on Linux?,type:question automation:Closed,"#### Category
- [x] Question
- [ ] Documentation issue
- [ ] Bug
[GNU stat](https://www.gnu.org/software/coreutils/manual/html_node/stat-invocation.html) has 'birth time', but it's not standard (https://unix.stackexchange.com/a/67895).
What should I set for fileSystemInfo.createdDateTime?
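One possible approach (an assumption on my side, not official OneDrive guidance): read a birth time when the platform exposes one, and otherwise fall back to the earliest timestamp `os.stat()` gives you, formatted as ISO 8601:
```python
# Sketch of a fallback strategy; st_birthtime is only present on some
# platforms (e.g. macOS/BSD), and most Linux setups won't expose it through
# os.stat(), so the earliest of mtime/ctime is used as a stand-in.
import os
from datetime import datetime, timezone

def created_datetime(path: str) -> str:
    st = os.stat(path)
    ts = getattr(st, "st_birthtime", None) or min(st.st_mtime, st.st_ctime)
    return datetime.fromtimestamp(ts, tz=timezone.utc).isoformat().replace("+00:00", "Z")

# Example: print(created_datetime("report.pdf"))
```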
",1.0,"What is the best way to specify fileSystemInfo.createdDateTime on Linux? - #### Category
- [x] Question
- [ ] Documentation issue
- [ ] Bug
[GNU stat](https://www.gnu.org/software/coreutils/manual/html_node/stat-invocation.html) has 'birth time', but it's not standard (https://unix.stackexchange.com/a/67895).
What should I set for fileSystemInfo.createdDateTime?
",1,what is the best way to specify filesysteminfo createddatetime on linux category question documentation issue bug has birth time but it s not standard what should i set for filesysteminfo createddatetime ,1
424136,12306281436.0,IssuesEvent,2020-05-12 00:58:30,LLNL/PyDV,https://api.github.com/repos/LLNL/PyDV,opened,Add filter command,Low Priority enhancement,"Procedure: Remove points from the curves that fail the specified domain predicate or range predicate. The predicates must be procedures that return true or false when applied to elements of a domain or range.
Usage: filter curve-list domain-predicate range-predicate",1.0,"Add filter command - Procedure: Remove points from the curves that fail the specified domain predicate or range predicate. The predicates must be procedures that return true or false when applied to elements of a domain or range.
Usage: filter curve-list domain-predicate range-predicate",0,add filter command procedure remove points from the curves that fail the specified domain predicate or range predicate the predicates must be procedures that return true or false when applied to elements of a domain or range usage filter curve list domain predicate range predicate,0
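A minimal sketch of the filter behavior described above (not PyDV's actual implementation): keep only the points whose x value passes the domain predicate and whose y value passes the range predicate.
```python
# Illustrative only. Filters a curve's (x, y) points with two predicates,
# mirroring the proposed "filter curve-list domain-predicate range-predicate".
def filter_curve(x, y, domain_pred, range_pred):
    kept = [(xi, yi) for xi, yi in zip(x, y) if domain_pred(xi) and range_pred(yi)]
    xs, ys = zip(*kept) if kept else ((), ())
    return list(xs), list(ys)

# Example: keep points with x > 0 and y < 100
xs, ys = filter_curve([-1, 2, 3], [50, 150, 20], lambda v: v > 0, lambda v: v < 100)
print(xs, ys)  # [3] [20]
```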
5560,20103602512.0,IssuesEvent,2022-02-07 08:15:36,pingcap/tidb,https://api.github.com/repos/pingcap/tidb,closed,"br: after br restore, tikv used storage is not balance ",type/bug severity/major component/br found/automation affects-5.3 affects-5.4,"## Bug Report
Please answer these questions before submitting your issue. Thanks!
### 1. Minimal reproduce step (Required)
run oltp_fun_001
### 2. What did you expect to see? (Required)
Restore finished at 18:26. TiKV used storage should be balanced across all nodes.
### 3. What did you see instead (Required)

### 4. What is your TiDB version? (Required)
/ # /br -V
Release Version: v5.4.0-nightly
Git Commit Hash: 76aae0d5c594f538af62caa883c73188a44170c4
Git Branch: heads/refs/tags/v5.4.0-nightly
Go Version: go1.16.4
UTC Build Time: 2021-12-26 08:07:37
Race Enabled: false
/ # /tidb-server -V
Release Version: v5.4.0-nightly
Edition: Community
Git Commit Hash: 76aae0d5c594f538af62caa883c73188a44170c4
Git Branch: heads/refs/tags/v5.4.0-nightly
UTC Build Time: 2021-12-26 08:09:11
GoVersion: go1.16.4
Race Enabled: false
TiKV Min Version: v3.0.0-60965b006877ca7234adaced7890d7b029ed1306
Check Table Before Drop: false
Logs and monitoring data can be retrieved from MinIO using the following testbed name.
endless-oltp--tps-542284-1-875",1.0,"br: after br restore, tikv used storage is not balance - ## Bug Report
Please answer these questions before submitting your issue. Thanks!
### 1. Minimal reproduce step (Required)
run oltp_fun_001
### 2. What did you expect to see? (Required)
Restore finished at 18:26. TiKV used storage should be balanced across all nodes.
### 3. What did you see instead (Required)

### 4. What is your TiDB version? (Required)
/ # /br -V
Release Version: v5.4.0-nightly
Git Commit Hash: 76aae0d5c594f538af62caa883c73188a44170c4
Git Branch: heads/refs/tags/v5.4.0-nightly
Go Version: go1.16.4
UTC Build Time: 2021-12-26 08:07:37
Race Enabled: false
/ # /tidb-server -V
Release Version: v5.4.0-nightly
Edition: Community
Git Commit Hash: 76aae0d5c594f538af62caa883c73188a44170c4
Git Branch: heads/refs/tags/v5.4.0-nightly
UTC Build Time: 2021-12-26 08:09:11
GoVersion: go1.16.4
Race Enabled: false
TiKV Min Version: v3.0.0-60965b006877ca7234adaced7890d7b029ed1306
Check Table Before Drop: false
Logs and monitoring data can be retrieved from MinIO using the following testbed name.
endless-oltp--tps-542284-1-875",1,br after br restore tikv used storage is not balance bug report please answer these questions before submitting your issue thanks minimal reproduce step required run oltp fun what did you expect to see required restore finished at tikv used should be balance in all nodes what did you see instead required what is your tidb version required br v release version nightly git commit hash git branch heads refs tags nightly go version utc build time race enabled false tidb server v release version nightly edition community git commit hash git branch heads refs tags nightly utc build time goversion race enabled false tikv min version check table before drop false logs and monitor can be get from minio using following testbed name endless oltp tps ,1
20993,16396613538.0,IssuesEvent,2021-05-18 01:11:59,mkrumholz/relational_rails,https://api.github.com/repos/mkrumholz/relational_rails,opened,Ability to Delete Plot from Plots Index,enhancement iteration 3 usability,"User Story 23, Child Delete From Childs Index Page (x1)
As a visitor
When I visit the `child_table_name` index page or a parent `child_table_name` index page
Next to every child, I see a link to delete that child
When I click the link
I should be taken to the `child_table_name` index page where I no longer see that child",True,"Ability to Delete Plot from Plots Index - User Story 23, Child Delete From Childs Index Page (x1)
As a visitor
When I visit the `child_table_name` index page or a parent `child_table_name` index page
Next to every child, I see a link to delete that child
When I click the link
I should be taken to the `child_table_name` index page where I no longer see that child",0,ability to delete plot from plots index user story child delete from childs index page as a visitor when i visit the child table name index page or a parent child table name index page next to every child i see a link to delete that child when i click the link i should be taken to the child table name index page where i no longer see that child,0
211288,7200024930.0,IssuesEvent,2018-02-05 17:40:16,robotology-playground/wholeBodyControllers,https://api.github.com/repos/robotology-playground/wholeBodyControllers,opened,Check mex-wholebodymodel status and eventually port the code into wholeBodyControllers,priority: high,[mex-wholeBodyModel](https://github.com/robotology/mex-wholebodymodel) will be used by @ahmadgazar for simulating iCub and Walkman with SEA. It is therefore necessary to check if the code still compiles and eventually port it into this repo.,1.0,Check mex-wholebodymodel status and eventually port the code into wholeBodyControllers - [mex-wholeBodyModel](https://github.com/robotology/mex-wholebodymodel) will be used by @ahmadgazar for simulating iCub and Walkman with SEA. It is therefore necessary to check if the code still compiles and eventually port it into this repo.,0,check mex wholebodymodel status and eventually port the code into wholebodycontrollers will be used by ahmadgazar for simulating icub and walkman with sea it is therefore necessary to check if the code still compiles and eventually port it into this repo ,0
6941,24042230448.0,IssuesEvent,2022-09-16 03:45:58,AdamXweb/awesome-aussie,https://api.github.com/repos/AdamXweb/awesome-aussie,opened,[ADDITION] AmazingCo,Awaiting Review Added to Airtable Automation from Airtable,"### Category
Other
### Software to be added
AmazingCo
### Supporting Material
URL: https://www.amazingco.me
Description: AmazingCo is an experiences and activities creator, helping people all around the world enjoy better real-world experiences.
Size:
HQ: Melbourne
LinkedIn: https://www.linkedin.com/company/amazingco/
#### See Record on Airtable:
https://airtable.com/app0Ox7pXdrBUIn23/tblYbuZoILuVA0X3L/rec9v3eGQqJQlxlVD",1.0,"[ADDITION] AmazingCo - ### Category
Other
### Software to be added
AmazingCo
### Supporting Material
URL: https://www.amazingco.me
Description: AmazingCo is an experiences and activities creator, helping people all around the world enjoy better real-world experiences.
Size:
HQ: Melbourne
LinkedIn: https://www.linkedin.com/company/amazingco/
#### See Record on Airtable:
https://airtable.com/app0Ox7pXdrBUIn23/tblYbuZoILuVA0X3L/rec9v3eGQqJQlxlVD",1, amazingco category other software to be added amazingco supporting material url description amazingco is an experiences and activities creator helping people all around the world enjoy better real world experiences size hq melbourne linkedin see record on airtable ,1
3718,14406688969.0,IssuesEvent,2020-12-03 20:36:48,SynBioDex/SBOL-visual,https://api.github.com/repos/SynBioDex/SBOL-visual,closed,Website should implicitly link to latest release,automation,"The website currently has to be manually updated for each release. Once we generate release artifacts automatically (#119), the website can instead point all of its links just to ""latest release"" URLs in GitHub, such that when we make a new release the website will be mostly automatically updated.
We can also stop linking old release material on the website, and just give an ""old releases here"" pointer to the release collection on GitHub.",1.0,"Website should implicitly link to latest release - The website currently has to be manually updated for each release. Once we generate release artifacts automatically (#119), the website can instead point all of its links just to ""latest release"" URLs in GitHub, such that when we make a new release the website will be mostly automatically updated.
We can also stop linking old release material on the website, and just give an ""old releases here"" pointer to the release collection on GitHub.",1,website should implicitly link to latest release the website currently has to be manually updated for each release once we generate release artifacts automatically the website can instead point all of its links just to latest release urls in github such that when we make a new release the website will be mostly automatically updated we can also stop linking old release material on the website and just give an old releases here pointer to the release collection on github ,1
1566,10343286757.0,IssuesEvent,2019-09-04 08:37:58,elastic/apm-agent-nodejs,https://api.github.com/repos/elastic/apm-agent-nodejs,closed,Jenkins doesn't detect invalid commit messages,[zube]: Inbox automation ci,"We are linting the PR commit messages as part of our Jenkins pipeline, but as seen in #1312, it somehow doesn't work and simply just marks commits as ok even though they are not.",1.0,"Jenkins doesn't detect invalid commit messages - We are linting the PR commit messages as part of our Jenkins pipeline, but as seen in #1312, it somehow doesn't work and simply just marks commits as ok even though they are not.",1,jenkins doesn t detect invalid commit messages we are linting the pr commit messages as part of our jenkins pipeline but as seen in it somehow doesn t work and simply just marks commits as ok even though they are not ,1
492027,14175404073.0,IssuesEvent,2020-11-12 21:32:19,rtCamp/web-stories-wp,https://api.github.com/repos/rtCamp/web-stories-wp,opened,Lightbox Effect - Close Button Issue,priority:high type:bug,"If the cover image option is disabled and a user clicks on one of the stories, the close option does not appear when the lightbox effect is triggered. ",1.0,"Lightbox Effect - Close Button Issue - If the cover image option is disabled and a user clicks on one of the stories, the close option does not appear when the lightbox effect is triggered. ",0,lightbox effect close button issue if the cover image option is disabled and a user clicks on one of the stories the close option does not appear when the lightbox effect is triggered ,0
6442,23152072270.0,IssuesEvent,2022-07-29 09:17:51,longhorn/longhorn,https://api.github.com/repos/longhorn/longhorn,closed,[BUG] The last healthy replica may be evicted or removed,kind/bug area/manager severity/1 require/automation-e2e kind/regression feature/scheduling backport-needed/1.2.5 backport-needed/1.3.1,"## Describe the bug
`test_disk_eviction_with_node_level_soft_anti_affinity_disabled` failed in master-head [edc1b83](https://github.com/longhorn/longhorn/commit/edc1b83c5fe906b1ec4248c0b1279cb13813bda9)
Double verified in release version, the fail situation not happen on V1.3.0
## To Reproduce
Steps to reproduce the behavior:
1. Setup longhorn with 3 nodes
2. Deploy longhorn-test
3. Run `test_disk_eviction_with_node_level_soft_anti_affinity_disabled`
4. After test [step 6](https://github.com/longhorn/longhorn-tests/blob/18435ee9f786477c5ee1734d4047a47bd1f2e31e/manager/integration/tests/test_node.py#L2600), the volume stays in the attaching state and no replicas exist
## Expected behavior
Test case should pass
## Log or Support bundle
[longhorn-support-bundle_35fabdcc-d73a-4168-a2dd-65c2298709b1_2022-07-15T06-48-21Z.zip](https://github.com/longhorn/longhorn/files/9118546/longhorn-support-bundle_35fabdcc-d73a-4168-a2dd-65c2298709b1_2022-07-15T06-48-21Z.zip)
## Environment
- Longhorn version: edc1b83
- Installation method (e.g. Rancher Catalog App/Helm/Kubectl): kubectl
- Kubernetes distro (e.g. RKE/K3s/EKS/OpenShift) and version: k3s
- Number of management node in the cluster: 1
- Number of worker node in the cluster: 3
- Node config
- OS type and version: Ubuntu 20.04
## Additional context
https://ci.longhorn.io/job/public/job/master/job/sles/job/amd64/job/longhorn-tests-sles-amd64/186/testReport/junit/tests/test_node/test_disk_eviction_with_node_level_soft_anti_affinity_disabled/
",1.0,"[BUG] The last healthy replica may be evicted or removed - ## Describe the bug
`test_disk_eviction_with_node_level_soft_anti_affinity_disabled` failed in master-head [edc1b83](https://github.com/longhorn/longhorn/commit/edc1b83c5fe906b1ec4248c0b1279cb13813bda9)
Double-verified against the release version; the failure does not happen on V1.3.0
## To Reproduce
Steps to reproduce the behavior:
1. Setup longhorn with 3 nodes
2. Deploy longhorn-test
3. Run `test_disk_eviction_with_node_level_soft_anti_affinity_disabled`
4. After test [step 6](https://github.com/longhorn/longhorn-tests/blob/18435ee9f786477c5ee1734d4047a47bd1f2e31e/manager/integration/tests/test_node.py#L2600), the volume stays in the attaching state and no replicas exist
## Expected behavior
Test case should pass
## Log or Support bundle
[longhorn-support-bundle_35fabdcc-d73a-4168-a2dd-65c2298709b1_2022-07-15T06-48-21Z.zip](https://github.com/longhorn/longhorn/files/9118546/longhorn-support-bundle_35fabdcc-d73a-4168-a2dd-65c2298709b1_2022-07-15T06-48-21Z.zip)
## Environment
- Longhorn version: edc1b83
- Installation method (e.g. Rancher Catalog App/Helm/Kubectl): kubectl
- Kubernetes distro (e.g. RKE/K3s/EKS/OpenShift) and version: k3s
- Number of management node in the cluster: 1
- Number of worker node in the cluster: 3
- Node config
- OS type and version: Ubuntu 20.04
## Additional context
https://ci.longhorn.io/job/public/job/master/job/sles/job/amd64/job/longhorn-tests-sles-amd64/186/testReport/junit/tests/test_node/test_disk_eviction_with_node_level_soft_anti_affinity_disabled/
",1, the last healthy replica may be evicted or removed describe the bug test disk eviction with node level soft anti affinity disabled failed in master head double verified in release version the fail situation not happen on to reproduce steps to reproduce the behavior setup longhorn with nodes deploy longhorn test run test disk eviction with node level soft anti affinity disabled after test volume will keep in attaching state and no replica exist expected behavior test case should pass log or support bundle environment longhorn version installation method e g rancher catalog app helm kubectl kubectl kubernetes distro e g rke eks openshift and version number of management node in the cluster number of worker node in the cluster node config os type and version ubuntu additional context ,1
3975,15054922549.0,IssuesEvent,2021-02-03 18:04:22,IBM/FHIR,https://api.github.com/repos/IBM/FHIR,opened,Migrate from Bintray,automation,"JFrog is sunsetting their bintray offering.
A replacement is needed
Consider migration to Artifactory or GitHub Packages.
Must complete by May 2021.",1.0,"Migrate from Bintray - JFrog is sunsetting their bintray offering.
A replacement is needed
Consider migration to Artifactory or GitHub Packages.
Must complete by May 2021.",1,migrate from bintray jfrog is sunsetting their bintray offering a replacement is needed consider migration to artifactory or github packages must complete by may ,1
4820,17645936105.0,IssuesEvent,2021-08-20 06:03:35,keptn/keptn,https://api.github.com/repos/keptn/keptn,opened,[doc] Automation for creating documentation for a new release and versioning of release docu,doc research ready-for-refinement release-automation,"## User story
When releasing a new version of Keptn, the documentation is also released based on the same release tag. As a user, I can switch between the release versions, while the latest stable version is shown by default.
### Details
* Using tagging / branching to create release documentation in https://github.com/keptn/keptn.github.io
* On the page, I can switch between the release docu. For example, see istio.io:

* When switching to an older release, the release version is reflected in the URL: https://istio.io/v1.9/
* Consequently, we should not show the documentation for previous releases, but rather the release docu the user has selected:

### Advantage
* By applying this approach, it becomes obsolete to duplicate the docu for each release in: https://github.com/keptn/keptn.github.io/tree/master/content/docs",1.0,"[doc] Automation for creating documentation for a new release and versioning of release docu - ## User story
When releasing a new version of Keptn, the documentation is also released based on the same release tag. As a user, I can switch between the release versions, while the latest stable version is shown by default.
### Details
* Using tagging / branching to create release documentation in https://github.com/keptn/keptn.github.io
* On the page, I can switch between the release docu. For example, see istio.io:

* When switching to an older release, the release version is reflected in the URL: https://istio.io/v1.9/
* Consequently, we should not show the documentation for previous releases, but rather the release docu the user has selected:

### Advantage
* By applying this approach, it becomes obsolete to duplicate the docu for each release in: https://github.com/keptn/keptn.github.io/tree/master/content/docs",1, automation for creating documentation for a new release and versioning of release docu user story when releasing a new version of keptn the documentation is also released based on the same release tag as a user i can switch between the release versions while the latest stable version is shown by default details using tagging branching to create release documentation in on the page i can switch between the release docu for example see istio io when switching to an older release the release version is reflected in the url consequently we should not show the documentation for previous releases but rather the release docu the user has selected advantage by applying this approach it becomes obsolete to duplicate the docu for each release in ,1
8885,3010710554.0,IssuesEvent,2015-07-28 14:32:48,joe-bader/test-repo,https://api.github.com/repos/joe-bader/test-repo,opened,"[CNVERG-54] iPhone 6, iPad 3 mini. Space area: Context Menu: Draw on Canvas: when an user draws something the image isn't displayed ",Crossplatform Mobile Testing QA,"[reporter=""a.shemerey"", created=""Wed, 22 Jul 2015 15:45:05 +0300""]
Log in as a user
Go to a Space area
Open context menu
Click 'Draw on Canvas'
Draw something
Result: when a user draws something, the image isn't displayed on the screen, but I can see what I have drawn on this space in any other browser / device
",1.0,"[CNVERG-54] iPhone 6, iPad 3 mini. Space area: Context Menu: Draw on Canvas: when an user draws something the image isn't displayed - [reporter=""a.shemerey"", created=""Wed, 22 Jul 2015 15:45:05 +0300""]
Log in as a user
Go to a Space area
Open context menu
Click 'Draw on Canvas'
Draw something
Result: when a user draws something, the image isn't displayed on the screen, but I can see what I have drawn on this space in any other browser / device
",0, iphone ipad mini space area context menu draw on canvas when an user draws something the image isn t displayed log in like an user go to a space area open context menu click draw on canvas draw something result when an user draws something the image isn t displayed on the screen but i can see what i have drown on this space in any other browser device ,0
344073,24796741785.0,IssuesEvent,2022-10-24 17:57:18,Eleanorgruth/whats-cookin,https://api.github.com/repos/Eleanorgruth/whats-cookin,closed,README.md,documentation,"- [x] Overview of project and goals
- [x] Overview of technologies used, challenges, wins, and any other reflections
- [x] Screenshots/gifs of your app
- [x] List of contributors",1.0,"README.md - - [x] Overview of project and goals
- [x] Overview of technologies used, challenges, wins, and any other reflections
- [x] Screenshots/gifs of your app
- [x] List of contributors",0,readme md overview of project and goals overview of technologies used challenges wins and any other reflections screenshots gifs of your app list of contributors,0
8766,27172225269.0,IssuesEvent,2023-02-17 20:34:15,OneDrive/onedrive-api-docs,https://api.github.com/repos/OneDrive/onedrive-api-docs,closed,App registration is not up to date,area:Docs automation:Closed,"App registration has moved into Azure. The corresponding settings, which should be configured under the Platforms header, are hard to find. //Olof
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: f02010cc-2715-86ca-b3fe-e4f92e934fdb
* Version Independent ID: 27b3e16e-f80f-32f4-0e4f-69cfcd1cf769
* Content: [Create an app with Microsoft Graph - OneDrive API - OneDrive dev center](https://docs.microsoft.com/en-us/onedrive/developer/rest-api/getting-started/app-registration?view=odsp-graph-online#feedback)
* Content Source: [docs/rest-api/getting-started/app-registration.md](https://github.com/OneDrive/onedrive-api-docs/blob/live/docs/rest-api/getting-started/app-registration.md)
* Product: **onedrive**
* GitHub Login: @rgregg
* Microsoft Alias: **rgregg**",1.0,"App registration is not up to date - App registration has moved in to Azure. Corresponding settings that should be done under Platforms header is hard to find. //Olof
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: f02010cc-2715-86ca-b3fe-e4f92e934fdb
* Version Independent ID: 27b3e16e-f80f-32f4-0e4f-69cfcd1cf769
* Content: [Create an app with Microsoft Graph - OneDrive API - OneDrive dev center](https://docs.microsoft.com/en-us/onedrive/developer/rest-api/getting-started/app-registration?view=odsp-graph-online#feedback)
* Content Source: [docs/rest-api/getting-started/app-registration.md](https://github.com/OneDrive/onedrive-api-docs/blob/live/docs/rest-api/getting-started/app-registration.md)
* Product: **onedrive**
* GitHub Login: @rgregg
* Microsoft Alias: **rgregg**",1,app registration is not up to date app registration has moved in to azure corresponding settings that should be done under platforms header is hard to find olof document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product onedrive github login rgregg microsoft alias rgregg ,1
4203,15797217504.0,IssuesEvent,2021-04-02 16:18:06,uiowa/uiowa,https://api.github.com/repos/uiowa/uiowa,closed,Replace broken links in admissions AOS migration postImportProcess.,5 points admissions.uiowa.edu automation,"There are 1500+ broken links in the AOS migration.
The report is happening in the `postImportProcess` method now. I think we can stick with that and expand it. We can replace some using the migration map and others using a manual map of source -> destination NIDs that admissions created.
We should double check all fields are being scanned that need to be.",1.0,"Replace broken links in admissions AOS migration postImportProcess. - There are 1500+ broken links in the AOS migration.
The report is happening in the `postImportProcess` method now. I think we can stick with that and expand it. We can replace some using the migration map and others using a manual map of source -> destination NIDs that admissions created.
We should double check all fields are being scanned that need to be.",1,replace broken links in admissions aos miration postimportprocess there are broken links in the aos migration the report is happening in the postimportprocess method now i think we can stick with that and expand it we can replace some using the migration map and others using a manual map of source destination nids that admissions created we should double check all fields are being scanned that need to be ,1
1713,10595012391.0,IssuesEvent,2019-10-09 18:02:27,IBM/ibm-spectrum-scale-csi-operator,https://api.github.com/repos/IBM/ibm-spectrum-scale-csi-operator,closed,Convert Operator deployment to playbook,Component: Automation Component: Bundling Phase: Development Severity: 2 Type: Enhancement Type: wontfix,The Operator deployment shouldn't be Bash. The original bash scripts were written when I wasn't as Ansible literate. I believe a playbook would be easier to comprehend and could ensure stateful information before the operator even triggers,1.0,Convert Operator deployment to playbook - The Operator deployment shouldn't be Bash. The original bash scripts were written when I wasn't as Ansible literate. I believe a playbook would be easier to comprehend and could ensure stateful information before the operator even triggers,1,convert operator deployment to playbook the operator deployment shouldn t be bash the original bash scripts were written when i wasn t as ansible literate i believe a playbook would be easier to comprehend and could ensure stateful information before the operator even triggers,1
978,8953064301.0,IssuesEvent,2019-01-25 18:20:05,mozilla-mobile/fenix,https://api.github.com/repos/mozilla-mobile/fenix,closed,Setup testing track on Google Play,🤖 automation,"Build, sign and upload Nightly builds to Google Play testing track.",1.0,"Setup testing track on Google Play - Build, sign and upload Nightly builds to Google Play testing track.",1,setup testing track on google play build sign and upload nightly builds to google play testing track ,1
88910,3787374593.0,IssuesEvent,2016-03-21 10:21:18,HubTurbo/HubTurbo,https://api.github.com/repos/HubTurbo/HubTurbo,closed,CONTRIBUTING.md should also reference process.md,aspect-devops forFirstTimers priority.medium,"Currently only points to [dev guide](https://github.com/HubTurbo/HubTurbo/blob/master/docs/developerGuide.md) and [workflow.md](https://github.com/HubTurbo/HubTurbo/blob/master/docs/workflow.md). Making [**process.md**](https://github.com/HubTurbo/HubTurbo/blob/master/docs/process.md) immediately visible from [CONTRIBUTING.md](https://github.com/HubTurbo/HubTurbo/blob/master/CONTRIBUTING.md) means that new contributors will be handed the following information on a silver platter:
1. The guidelines and conventions for submitting pull requests
2. Exposes how pull requests are approved for merging.
This change should (hopefully) let new contributors catch simple problems in their PRs without a dev having to step in.",1.0,"CONTRIBUTING.md should also reference process.md - Currently only points to [dev guide](https://github.com/HubTurbo/HubTurbo/blob/master/docs/developerGuide.md) and [workflow.md](https://github.com/HubTurbo/HubTurbo/blob/master/docs/workflow.md). Making [**process.md**](https://github.com/HubTurbo/HubTurbo/blob/master/docs/process.md) immediately visible from [CONTRIBUTING.md](https://github.com/HubTurbo/HubTurbo/blob/master/CONTRIBUTING.md) means that new contributors will be handed the following information on a silver platter:
1. The guidelines and conventions for submitting pull requests
2. Exposes how pull requests are approved for merging.
This change should (hopefully) let new contributors catch simple problems in their PRs without a dev having to step in.",0,contributing md should also reference process md currently only points to and making immediately visible from means that new contributors will be handed the following information on a silver platter the guidelines and conventions for submitting pull requests exposes how pull requests are approved for merging this change should hopefully let new contributors catch simple problems in their prs without a dev having to step in ,0
335546,10155142005.0,IssuesEvent,2019-08-06 09:38:01,webcompat/web-bugs,https://api.github.com/repos/webcompat/web-bugs,closed,m.flickr.com - see bug description,browser-firefox-mobile engine-gecko priority-important,"
**URL**: https://m.flickr.com/#/photos/65665666@N06/8679478921
**Browser / Version**: Firefox Mobile 68.0
**Operating System**: Android
**Tested Another Browser**: No
**Problem type**: Something else
**Description**: content not visible
**Steps to Reproduce**:
It is not possible to see pictures posted on this site
Browser Configuration
None
_From [webcompat.com](https://webcompat.com/) with ❤️_",1.0,"m.flickr.com - see bug description -
**URL**: https://m.flickr.com/#/photos/65665666@N06/8679478921
**Browser / Version**: Firefox Mobile 68.0
**Operating System**: Android
**Tested Another Browser**: No
**Problem type**: Something else
**Description**: content not visible
**Steps to Reproduce**:
It is not possible to see pictures posted on this site
Browser Configuration
None
_From [webcompat.com](https://webcompat.com/) with ❤️_",0,m flickr com see bug description url browser version firefox mobile operating system android tested another browser no problem type something else description content not visible steps to reproduce it is not possible to see pictures posted on this site browser configuration none from with ❤️ ,0
4783,17468711587.0,IssuesEvent,2021-08-06 21:19:19,dotnet/arcade,https://api.github.com/repos/dotnet/arcade,closed,http client timeouts when uploading blobs to storage account during publishing,First Responder Detected By - Automation,"
- [ ] This issue is blocking
- [X] This issue is causing unreasonable pain
We're seeing some failed publishing jobs where we fail to upload a blob because we hit the default 100 second HttpClient timeout. We should see if we can increase the timeout in the Azure libraries, and whether that helps or not.
Some example failures:
* https://dev.azure.com/dnceng/internal/_build/results?buildId=1265525&view=logs&j=ba23343f-f710-5af9-782d-5bd26b102304&t=6e277ba4-1c1e-552d-b96f-db0aeb4be20a
* https://dev.azure.com/dnceng/internal/_build/results?buildId=1264791&view=logs&j=ba23343f-f710-5af9-782d-5bd26b102304&t=6e277ba4-1c1e-552d-b96f-db0aeb4be20a
* https://dev.azure.com/dnceng/internal/_build/results?buildId=1265145&view=logs&j=ba23343f-f710-5af9-782d-5bd26b102304&t=6e277ba4-1c1e-552d-b96f-db0aeb4be20a
The operation that is failing is here: https://github.com/dotnet/arcade/blob/b038a54d9137901e22868692b3d1f5c050e968c8/src/Microsoft.DotNet.Build.Tasks.Feed/src/common/AzureStorageUtils.cs#L85
",1.0,"http client timeouts when uploading blobs to storage account during publishing -
- [ ] This issue is blocking
- [X] This issue is causing unreasonable pain
We're seeing some failed publishing jobs where we fail to upload a blob because we hit the default 100 second HttpClient timeout. We should see if we can increase the timeout in the Azure libraries, and whether that helps or not.
Some example failures:
* https://dev.azure.com/dnceng/internal/_build/results?buildId=1265525&view=logs&j=ba23343f-f710-5af9-782d-5bd26b102304&t=6e277ba4-1c1e-552d-b96f-db0aeb4be20a
* https://dev.azure.com/dnceng/internal/_build/results?buildId=1264791&view=logs&j=ba23343f-f710-5af9-782d-5bd26b102304&t=6e277ba4-1c1e-552d-b96f-db0aeb4be20a
* https://dev.azure.com/dnceng/internal/_build/results?buildId=1265145&view=logs&j=ba23343f-f710-5af9-782d-5bd26b102304&t=6e277ba4-1c1e-552d-b96f-db0aeb4be20a
The operation that is failing is here: https://github.com/dotnet/arcade/blob/b038a54d9137901e22868692b3d1f5c050e968c8/src/Microsoft.DotNet.Build.Tasks.Feed/src/common/AzureStorageUtils.cs#L85
",1,http client timeouts when uploading blobs to storage account during publishing this issue is blocking this issue is causing unreasonable pain we re seeing some failed publishing jobs where we fail to upload a blob because we hit the default second httpclient timeout we should see if we can increase the timeout in the azure libraries and whether that helps or not some example failures the operation that is failing is here ,1
7766,25568323409.0,IssuesEvent,2022-11-30 15:48:12,hackforla/website,https://api.github.com/repos/hackforla/website,opened,ER: github-actions bot removing Draft label,Size: Large Feature: Board/GitHub Maintenance automation role: dev leads Draft size: 0.25pt,"### Emergent Requirement - Problem
The github-actions bot is removing the `Draft` label
### Issue you discovered this emergent requirement in
- #
### Date discovered
### Did you have to do something temporarily
- [ ] YES
- [ ] NO
### Who was involved
@
### What happens if this is not addressed
### Resources
### Recommended Action Items
- [ ] Make a new issue
- [ ] Discuss with team
- [ ] Let a Team Lead know
### Potential solutions [draft]
",1.0,"ER: github-actions bot removing Draft label - ### Emergent Requirement - Problem
The github-actions bot is removing the `Draft` label
### Issue you discovered this emergent requirement in
- #
### Date discovered
### Did you have to do something temporarily
- [ ] YES
- [ ] NO
### Who was involved
@
### What happens if this is not addressed
### Resources
### Recommended Action Items
- [ ] Make a new issue
- [ ] Discuss with team
- [ ] Let a Team Lead know
### Potential solutions [draft]
",1,er github actions bot removing draft label emergent requirement problem the github actions bot is removing the draft label issue you discovered this emergent requirement in date discovered did you have to do something temporarily yes no who was involved what happens if this is not addressed resources recommended action items make a new issue discuss with team let a team lead know potential solutions ,1
4261,15893773621.0,IssuesEvent,2021-04-11 07:31:49,openhab/openhab-core,https://api.github.com/repos/openhab/openhab-core,closed,[automation] Schedule shows disabled rules,PR pending UI automation,"The schedule displays all rules - independent of the state. Even when rules are disabled by the user, they will be displayed in the schedule - this is really confusing.",1.0,[automation] Schedule shows disabled rules - The schedule displays all rules - independent of the state. Even when rules are disabled by the user, they will be displayed in the schedule - this is really confusing.,1, schedule shows disabled rules the schedule displays all rules independent of the state event when rules are disabled by user they will be displayed in the schedule this is really confusing ,1
112929,14347543510.0,IssuesEvent,2020-11-29 07:52:44,mexyn/statev_v2_issues,https://api.github.com/repos/mexyn/statev_v2_issues,closed,Relocation of several loading points ,gamedesign solved,"Jason_Rains
27.11.2020, 14:31
Relocation of the following loading zones
Company hash pfHawick3 (Image 1)
Company hash pfDownVine28 (Image 2)


The loading zones cannot be reached with larger vehicles, so please relocate them to the spot where the people in the image are standing
",1.0,"Versetztung mehrerer Ladepunkte - Jason_Rains
27.11.2020, 14:31
Relocation of the following loading zones
Company hash pfHawick3 (Image 1)
Company hash pfDownVine28 (Image 2)


The loading zones cannot be reached with larger vehicles, so please relocate them to the spot where the people in the image are standing
",0,versetztung mehrerer ladepunkte jason rains uhr versetzung folgender ladezonen firmenhash bild firmenhash bild die ladezonen sind mit größeren fahrzeugen nicht erreichbar daher bitte die versetzung an die stelle an denen sich die personen auf dem bild befinden ,0
119473,10054324421.0,IssuesEvent,2019-07-22 00:38:27,wesnoth/wesnoth,https://api.github.com/repos/wesnoth/wesnoth,closed,Add the asymmetric theme?,Enhancement Ready for testing UI,"Since gloccusv posted his [asymmetric theme](https://forums.wesnoth.org/viewtopic.php?f=6&t=41065&start=15), I've been using a modified version of it ([code](http://sprunge.us/uLV6Pj), [screenshot](https://forums.wesnoth.org/download/file.php?id=83887&mode=view)). My version works best on master (because it uses some of the features from #3852). I've considered packaging it [as an add-on](https://forums.wesnoth.org/viewtopic.php?f=21&t=50213) but I wonder if it'll be easier to just add it to mainline?
*edit* That code patch is just what I'm using right now in my personal branch. I am **not** proposing to just apply that to master as-is; if the concept is acceptable, I'll clean the patch up before merging it.
- [ ] On 2560x1440 the left bar is 1106 pixels high, not full length.",1.0,"Add the asymmetric theme? - Since gloccusv posted his [asymmetric theme](https://forums.wesnoth.org/viewtopic.php?f=6&t=41065&start=15), I've been using a modified version of it ([code](http://sprunge.us/uLV6Pj), [screenshot](https://forums.wesnoth.org/download/file.php?id=83887&mode=view)). My version works best on master (because it uses some of the features from #3852). I've considered packaging it [as an add-on](https://forums.wesnoth.org/viewtopic.php?f=21&t=50213) but I wonder if it'll be easier to just add it to mainline?
*edit* That code patch is just what I'm using right now in my personal branch. I am **not** proposing to just apply that to master as-is; if the concept is acceptable, I'll clean the patch up before merging it.
- [ ] On 2560x1440 the left bar is 1106 pixels high, not full length.",0,add the asymmetric theme since gloccusv posted his i ve been using a modified version of it my version works best on master because it uses some of the features from i ve considered packaging it but i wonder if it ll be easier to just add it to mainline edit that code patch is just what i m using right now in my personal branch i am not proposing to just apply that to master as is if the concept is acceptable i ll clean the patch up before merging it on the left bar is pixels high not full length ,0
4067,15345054762.0,IssuesEvent,2021-02-28 04:52:07,pc2ccs/pc2v9,https://api.github.com/repos/pc2ccs/pc2v9,closed,Load reject.ini from CDP config directory.,automation enhancement,"**Is your feature request related to a problem?**
During a contest/test on 2/13/2021 it was clear that the reject.ini could not be loaded from a CDP.
This feature will automate the loading of judgements from a CDP config directory.
This allows the judgements to be specified in a CDP, otherwise the reject.ini must be copied to
wherever the pc2 server is installed.
**Feature Description**:
When contest.yaml is loaded and if a reject.ini is present in that same directory - load judgements from that reject.ini
Precedence would be:
1 - Load judgements from CDP/config/reject.ini
2 - Load judgements from reject.ini in directory where server started
3 - Load default judgements
Add/Implement for:
-load option
- load yaml on the Admin
- export/write contest yaml
**Have you considered other ways to accomplish the same thing?**
Yes. The reject.ini must be copied to wherever the pc2 server is installed.
**Additional context**:
none.",1.0,"Load reject.ini from CDP config directory. - **Is your feature request related to a problem?**
During a contest/test on 2/13/2021 it was clear that the reject.ini could not be loaded from a CDP.
This feature will automate the loading of judgements from a CDP config directory.
This allows the judgements to be specified in a CDP, otherwise the reject.ini must be copied to
wherever the pc2 server is installed.
**Feature Description**:
When contest.yaml is loaded and if a reject.ini is present in that same directory - load judgements from that reject.ini
Precedence would be:
1 - Load judgements from CDP/config/reject.ini
2 - Load judgements from reject.ini in directory where server started
3 - Load default judgements
Add/Implement for:
-load option
- load yaml on the Admin
- export/write contest yaml
**Have you considered other ways to accomplish the same thing?**
Yes. The reject.ini must be copied to wherever the pc2 server is installed.
**Additional context**:
none.",1,load reject ini from cdp config directory is your feature request related to a problem during a contest test on it was clear that the reject ini could not be loaded from a cdp this feature will automate the loading of judgements from a cdp config directory this allows the judgements to be specified in a cdp otherwise the reject ini must be copied to wherever the server is installed feature description when contest yaml is loaded and if a reject ini is present in that same directory load judgements from that reject ini precedence would be load judgements from cdp config reject ini load judgements from reject ini in directory where server started load default judgements add implement for load option load yaml on the admin export write contest yaml have you considered other ways to accomplish the same thing yes the reject ini must be copied to wherever the server is installed additional context none ,1
6526,23344694284.0,IssuesEvent,2022-08-09 16:48:14,bcgov/api-services-portal,https://api.github.com/repos/bcgov/api-services-portal,closed,FAILED: Automated Tests(10),automation,"Stats: {
""suites"": 40,
""tests"": 302,
""passes"": 292,
""pending"": 0,
""failures"": 10,
""start"": ""2022-08-08T20:21:52.637Z"",
""end"": ""2022-08-08T20:38:05.261Z"",
""duration"": 671495,
""testsRegistered"": 302,
""passPercent"": 96.6887417218543,
""pendingPercent"": 0,
""other"": 0,
""hasOther"": false,
""skipped"": 0,
""hasSkipped"": false
}
Failed Tests:
""Verify that the option to approve request is displayed""
""Grant only \""Namespace.Manage\"" permission to Wendy""
""Verify that all the namespace options and activities are displayed""
""Grant only \""CredentialIssuer.Admin\"" access to Wendy (access manager)""
""Verify that only Authorization Profile option is displayed in Namespace page""
""Grant only \""Namespace.View\"" permission to Mark""
""Verify that the option to approve request is not displayed""
""Verify that service accounts are not created""
""Grant \""GatewayConfig.Publish\"" and \""Namespace.View\"" access to Wendy (access manager)""
""Verify that GWA API allows user to publish the API to Kong gateway""
Run Link: https://github.com/bcgov/api-services-portal/actions/runs/2820624293",1.0,"FAILED: Automated Tests(10) - Stats: {
""suites"": 40,
""tests"": 302,
""passes"": 292,
""pending"": 0,
""failures"": 10,
""start"": ""2022-08-08T20:21:52.637Z"",
""end"": ""2022-08-08T20:38:05.261Z"",
""duration"": 671495,
""testsRegistered"": 302,
""passPercent"": 96.6887417218543,
""pendingPercent"": 0,
""other"": 0,
""hasOther"": false,
""skipped"": 0,
""hasSkipped"": false
}
Failed Tests:
""Verify that the option to approve request is displayed""
""Grant only \""Namespace.Manage\"" permission to Wendy""
""Verify that all the namespace options and activities are displayed""
""Grant only \""CredentialIssuer.Admin\"" access to Wendy (access manager)""
""Verify that only Authorization Profile option is displayed in Namespace page""
""Grant only \""Namespace.View\"" permission to Mark""
""Verify that the option to approve request is not displayed""
""Verify that service accounts are not created""
""Grant \""GatewayConfig.Publish\"" and \""Namespace.View\"" access to Wendy (access manager)""
""Verify that GWA API allows user to publish the API to Kong gateway""
Run Link: https://github.com/bcgov/api-services-portal/actions/runs/2820624293",1,failed automated tests stats suites tests passes pending failures start end duration testsregistered passpercent pendingpercent other hasother false skipped hasskipped false failed tests verify that the option to approve request is displayed grant only namespace manage permission to wendy verify that all the namespace options and activities are displayed grant only credentialissuer admin access to wendy access manager verify that only authorization profile option is displayed in namespace page grant only namespace view permission to mark verify that the option to approve request is not displayed verify that service accounts are not created grant gatewayconfig publish and namespace view access to wendy access manager verify that gwa api allows user to publish the api to kong gateway run link ,1
208588,15895752566.0,IssuesEvent,2021-04-11 15:04:51,lewiswatson55/SEM-Group19,https://api.github.com/repos/lewiswatson55/SEM-Group19,closed,ToString Unit Test (For All Classes),Testing Missing,Working example done for language class in Unit Test File.,1.0,ToString Unit Test (For All Classes) - Working example done for language class in Unit Test File.,0,tostring unit test for all classes working example done for language class in unit test file ,0
8907,27194445312.0,IssuesEvent,2023-02-20 03:06:07,AnthonyMonterrosa/C-sharp-service-stack,https://api.github.com/repos/AnthonyMonterrosa/C-sharp-service-stack,closed,Only Allow PRs That Follow Environment Pipeline.,automation enhancement,"Currently, any branch can have a PR merged into `test` and `production`, but we only want `main`->`test` and `test`->`production` to be possible. I believe this can be enforced with a GitHub Action that fails depending on the branch name and the branch to merge into which are both available from github via `github.head_ref` and `github.base_ref`, respectively.
https://stackoverflow.com/questions/58033366/how-to-get-the-current-branch-within-github-actions",1.0,"Only Allow PRs That Follow Environment Pipeline. - Currently, any branch can have a PR merged into `test` and `production`, but we only want `main`->`test` and `test`->`production` to be possible. I believe this can be enforced with a GitHub Action that fails depending on the branch name and the branch to merge into which are both available from github via `github.head_ref` and `github.base_ref`, respectively.
https://stackoverflow.com/questions/58033366/how-to-get-the-current-branch-within-github-actions",1,only allow prs that follow environment pipeline currently any branch can have a pr merged into test and production but we only want main test and test production to be possible i believe this can be enforced with a github action that fails depending on the branch name and the branch to merge into which are both available from github via github head ref and github base ref respectively ,1
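The branch-pair check described in this record could be sketched as a GitHub Actions workflow. Only `github.head_ref`, `github.base_ref`, and the `main`/`test`/`production` branch names come from the issue; the workflow name, job name, and shell logic are illustrative assumptions.
```yaml
# Hypothetical sketch: fail any pull request whose head/base pair is not
# main -> test or test -> production.
name: enforce-promotion-path
on:
  pull_request:
    branches: [test, production]
jobs:
  check-branch-pair:
    runs-on: ubuntu-latest
    steps:
      - name: Verify head/base branch pair
        run: |
          echo "head=${{ github.head_ref }} base=${{ github.base_ref }}"
          if [ "${{ github.base_ref }}" = "test" ] && [ "${{ github.head_ref }}" != "main" ]; then
            echo "Only main may be merged into test"; exit 1
          fi
          if [ "${{ github.base_ref }}" = "production" ] && [ "${{ github.head_ref }}" != "test" ]; then
            echo "Only test may be merged into production"; exit 1
          fi
```
On its own this only reports a failure; marking the job as a required status check on the `test` and `production` branches is what would actually block the merge.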
10059,31468842600.0,IssuesEvent,2023-08-30 05:43:52,ntut-open-source-club/practical-tools-for-simple-design,https://api.github.com/repos/ntut-open-source-club/practical-tools-for-simple-design,closed,Fix lint/format warnings and `-Werror`,refactoring automation,"1. [ ] Fix clang-tidy and clang-format warnings
2. [ ] Add `-Werror` in Github Action",1.0,"Fix lint/format warnings and `-Werror` - 1. [ ] Fix clang-tidy and clang-format warnings
2. [ ] Add `-Werror` in Github Action",1,fix lint format warnings and werror fix clang tidy and clang format warnings add werror in github action,1
5780,21076860114.0,IssuesEvent,2022-04-02 09:13:07,SmartDataAnalytics/OpenResearch,https://api.github.com/repos/SmartDataAnalytics/OpenResearch,closed,Duplicate acronyms,WP 2.7 Hosting automation Migration needs manual fixing,"```
Exception: INSERT INTO Event (acronym,ordinal,homepage,title,startDate,endDate) values (:acronym,:ordinal,:homepage,:title,:startDate,:endDate)
failed:UNIQUE constraint failed: Event.acronym
record #281
```
",1.0,"Duplicate acronyms - ```
Exception: INSERT INTO Event (acronym,ordinal,homepage,title,startDate,endDate) values (:acronym,:ordinal,:homepage,:title,:startDate,:endDate)
failed:UNIQUE constraint failed: Event.acronym
record #281
```
",1,duplicate acronyms exception insert into event acronym ordinal homepage title startdate enddate values acronym ordinal homepage title startdate enddate failed unique constraint failed event acronym record ,1
7172,24345571343.0,IssuesEvent,2022-10-02 09:06:41,home-assistant/core,https://api.github.com/repos/home-assistant/core,closed,"Invalid Automation added to automations.yaml, not showing up on the automations list",integration: automation stale,"### The problem
I'm working on a blueprint for automating my lights. I obviously have an issue with it which I'm still figuring out.
The problem is that when I add a new automation based on the broken blueprint NOTHING shows up in the automation list (since it's broken), however it still gets added to the `automations.yaml` file and still tries to run, even if the blueprint is further changed. This causes errors while restarting since HA detects that there is an invalid automation. This is expected, sure.
However there is no indication on the Automations screen that the broken automation is active. I spent 30 minutes changing stuff, removing files, tried to restart HA only to get an error over and over again. Finally I opened the `automations.yaml` file, where I assumed I have not made any changes (I only tinkered with the blueprint and I had no new automations in the UI). There I found like 4-5 automations based on the blueprint that were added (one for each time I attempted to create an automation from the blueprint).
My request would be to display all automations from the `automations.yaml` file, even the ones that have invalid configuration/blueprint, but mark them as disabled and give the user the option to delete them. Since there is obviously a logic for filtering invalid automations, I'm hoping it would be easy to instead display them and provide more visibility for the user.
### What version of Home Assistant Core has the issue?
2022.8.6
### What was the last working version of Home Assistant Core?
_No response_
### What type of installation are you running?
Home Assistant OS
### Integration causing the issue
Automations
### Link to integration documentation on our website
https://www.home-assistant.io/docs/automation/
### Diagnostics information
_No response_
### Example YAML snippet
```yaml
blueprint:
name: Motion Light with Override
description: Turn a light on based on detected motion
domain: automation
input:
input_motion_sensor_entity:
name: Motion Sensor
selector:
entity:
domain: binary_sensor
device_class: door
# device_class: motion
input_light_switch_entity:
name: Light Switch Entity
selector:
entity:
domain: binary_sensor
device_class: plug
# device_class: power
cooldown_period:
name: Cooldown Period
selector:
number:
min: 0
max: 1800
step: 1
mode: box
unit_of_measurement: seconds
target_light_entity:
name: The light to control
selector:
entity:
domain: light
alias: Test Light Motion (Duplicate)
description: """"
trigger:
- platform: state
entity_id:
- !input input_motion_sensor_entity
id: Motion Triggered
from: ""off""
to: ""on""
- platform: state
entity_id:
- !input input_motion_sensor_entity
id: Motion Cleared
from: ""on""
to: ""off""
condition: []
# variables:
# lightSwitchEntity: !input input_light_switch_entity
# cooldownPeriod: !input cooldown_period
action:
- choose:
- conditions:
- condition: and
conditions:
- condition: trigger
id: Motion Triggered
- condition: state
entity_id: !input input_light_switch_entity
state: ""off""
- condition: template
value_template: >-
{{ as_timestamp(now()) - as_timestamp(states[lightSwitchEntity].last_changed) > 10 }}
sequence:
- type: turn_on
entity_id: !input target_light_entity
domain: light
- conditions:
- condition: and
conditions:
- condition: trigger
id: Motion Cleared
- condition: state
entity_id: !input input_light_switch_entity
state: ""off""
- condition: template
value_template: >-
{{ as_timestamp(now()) - as_timestamp(states[lightSwitchEntity].last_changed) > 10 }}
sequence:
- type: turn_off
entity_id: !input target_light_entity
domain: light
default: []
mode: parallel
max: 3
```
### Anything in the logs that might be useful for us?
```txt
2022-08-26 09:07:38.625 ERROR (MainThread) [homeassistant.components.automation] Blueprint Motion Light with Override generated invalid automation with inputs OrderedDict([('input_motion_sensor_entity', 'binary_sensor.washing_machine_door_sensor_contact'), ('input_light_switch_entity', 'binary_sensor.p30pro_is_charging'), ('cooldown_period', 10), ('target_light_entity', 'light.zigbee_bulb')]): Unable to determine action @ data['action'][0]['choose'][0]['sequence'][0]. Got None
```
### Additional information
_No response_",1.0,"Invalid Automation added to automations.yaml, not showing up on the automations list - ### The problem
I'm working on a blueprint for automating my lights. I obviously have an issue with it which I'm still figuring out.
The problem is that when I add a new automation based on the broken blueprint NOTHING shows up in the automation list (since it's broken), however it still gets added to the `automations.yaml` file and still tries to run, even if the blueprint is further changed. This causes errors while restarting since HA detects that there is an invalid automation. This is expected, sure.
However there is no indication on the Automations screen that the broken automation is active. I spent 30 minutes changing stuff, removing files, tried to restart HA only to get an error over and over again. Finally I opened the `automations.yaml` file, where I assumed I have not made any changes (I only tinkered with the blueprint and I had no new automations in the UI). There I found like 4-5 automations based on the blueprint that were added (one for each time I attempted to create an automation from the blueprint).
My request would be to display all automations from the `automations.yaml` file, even the ones that have invalid configuration/blueprint, but mark them as disabled and give the user the option to delete them. Since there is obviously a logic for filtering invalid automations, I'm hoping it would be easy to instead display them and provide more visibility for the user.
### What version of Home Assistant Core has the issue?
2022.8.6
### What was the last working version of Home Assistant Core?
_No response_
### What type of installation are you running?
Home Assistant OS
### Integration causing the issue
Automations
### Link to integration documentation on our website
https://www.home-assistant.io/docs/automation/
### Diagnostics information
_No response_
### Example YAML snippet
```yaml
blueprint:
name: Motion Light with Override
description: Turn a light on based on detected motion
domain: automation
input:
input_motion_sensor_entity:
name: Motion Sensor
selector:
entity:
domain: binary_sensor
device_class: door
# device_class: motion
input_light_switch_entity:
name: Light Switch Entity
selector:
entity:
domain: binary_sensor
device_class: plug
# device_class: power
cooldown_period:
name: Cooldown Period
selector:
number:
min: 0
max: 1800
step: 1
mode: box
unit_of_measurement: seconds
target_light_entity:
name: The light to control
selector:
entity:
domain: light
alias: Test Light Motion (Duplicate)
description: """"
trigger:
- platform: state
entity_id:
- !input input_motion_sensor_entity
id: Motion Triggered
from: ""off""
to: ""on""
- platform: state
entity_id:
- !input input_motion_sensor_entity
id: Motion Cleared
from: ""on""
to: ""off""
condition: []
# variables:
# lightSwitchEntity: !input input_light_switch_entity
# cooldownPeriod: !input cooldown_period
action:
- choose:
- conditions:
- condition: and
conditions:
- condition: trigger
id: Motion Triggered
- condition: state
entity_id: !input input_light_switch_entity
state: ""off""
- condition: template
value_template: >-
{{ as_timestamp(now()) - as_timestamp(states[lightSwitchEntity].last_changed) > 10 }}
sequence:
- type: turn_on
entity_id: !input target_light_entity
domain: light
- conditions:
- condition: and
conditions:
- condition: trigger
id: Motion Cleared
- condition: state
entity_id: !input input_light_switch_entity
state: ""off""
- condition: template
value_template: >-
{{ as_timestamp(now()) - as_timestamp(states[lightSwitchEntity].last_changed) > 10 }}
sequence:
- type: turn_off
entity_id: !input target_light_entity
domain: light
default: []
mode: parallel
max: 3
```
### Anything in the logs that might be useful for us?
```txt
2022-08-26 09:07:38.625 ERROR (MainThread) [homeassistant.components.automation] Blueprint Motion Light with Override generated invalid automation with inputs OrderedDict([('input_motion_sensor_entity', 'binary_sensor.washing_machine_door_sensor_contact'), ('input_light_switch_entity', 'binary_sensor.p30pro_is_charging'), ('cooldown_period', 10), ('target_light_entity', 'light.zigbee_bulb')]): Unable to determine action @ data['action'][0]['choose'][0]['sequence'][0]. Got None
```
### Additional information
_No response_",1,invalid automation added to automations yaml not showing up on the automations list the problem i m working on a blueprint for automating my lights i obviously have an issue with it which i m still figuring out the problem is that when i add a new automation based on the broken blueprint nothing shows up in the automation list since it s broken however it still gets added to the automations yaml file and still tries to run even if the blueprint is further changed this causes errors while restarting since ha detects that there is an invalid automation this is expected sure however there is no indication on the automations screen that the broken automation is active i spent minutes changing stuff removing files tried to restart ha only to get an error over and over again finally i opened the automations yaml file where i assumed i have not made any changes i only tinkered with the blueprint and i had no new automations in the ui there i found like automations based on the blueprint that were added one for each time i attempted to create an automation from the blueprint my request would be to display all automations from the automations yaml file even the ones that have invalid configuration blueprint but mark them as disabled and give the user the option to delete them since there is obviously a logic for filtering invalid automations i m hoping it would be easy to instead display them and provide more visibility for the user what version of home assistant core has the issue what was the last working version of home assistant core no response what type of installation are you running home assistant os integration causing the issue automations link to integration documentation on our website diagnostics information no response example yaml snippet yaml blueprint name motion light with override description turn a light on based on detected motion domain automation input input motion sensor entity name motion sensor selector entity domain binary sensor device class door device class motion input light switch entity name light switch entity selector entity domain binary sensor device class plug device class power cooldown period name cooldown period selector number min max step mode box unit of measurement seconds target light entity name the light to control selector entity domain light alias test light motion duplicate description trigger platform state entity id input input motion sensor entity id motion triggered from off to on platform state entity id input input motion sensor entity id motion cleared from on to off condition variables lightswitchentity input input light switch entity cooldownperiod input cooldown period action choose conditions condition and conditions condition trigger id motion triggered condition state entity id input input light switch entity state off condition template value template as timestamp now as timestamp states last changed sequence type turn on entity id input target light entity domain light conditions condition and conditions condition trigger id motion cleared condition state entity id input input light switch entity state off condition template value template as timestamp now as timestamp states last changed sequence type turn off entity id input target light entity domain light default mode parallel max anything in the logs that might be useful for us txt error mainthread blueprint motion light with override generated invalid automation with inputs ordereddict unable to determine action data got none additional information no response 
,1
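A note on the `Unable to determine action` error quoted in the record above: the blueprint's `sequence` steps use device-action syntax (`type: turn_on` / `type: turn_off`) without a `device_id`, which Home Assistant cannot resolve into a device action. A minimal, hypothetical sketch of the same step written as a plain service call, reusing the blueprint's `target_light_entity` input, could look like this:
```yaml
# Hypothetical replacement for the failing sequence step in the blueprint above.
# A service call only needs an entity target, so it avoids the device-action
# fields (device_id/type) that the original step omits.
sequence:
  - service: light.turn_on
    target:
      entity_id: !input target_light_entity
```
The `turn_off` branch would change the same way; whether this is the actual cause of the blueprint failure is an assumption, but it matches the `Got None` action error in the log.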
1840,10920483976.0,IssuesEvent,2019-11-21 21:22:37,sourcegraph/sourcegraph,https://api.github.com/repos/sourcegraph/sourcegraph,opened,a8n: Align API for previews and external changesets,automation,"I know I signed this off earlier, but it turns out to be quite a hassle that `ChangesetPlan` and `ExternalChangeset` are not very similar in their structure. It's just really confusing to use and if we get stuck with it during WebApp development already, how should our customers understand it.
```graphql
type ChangesetPlan {
repository: Repository!
fileDiffs(first: Int): PreviewFileDiffConnection!
}
type PreviewFileDiffConnection {
nodes: [PreviewFileDiff!]!
totalCount: Int
pageInfo: PageInfo!
diffStat: DiffStat!
rawDiff: String!
}
type PreviewFileDiff {
oldPath: String
oldFile: File2
newPath: String
hunks: [FileDiffHunk!]!
stat: DiffStat!
internalID: String!
}
```
```graphql
type ExternalChangeset {
repository: Repository!
diff: RepositoryComparison
... others
}
type RepositoryComparison {
baseRepository: Repository!
headRepository: Repository!
range: GitRevisionRange!
commits(first: Int): GitCommitConnection!
fileDiffs(first: Int): FileDiffConnection!
}
type FileDiffConnection {
nodes: [FileDiff!]!
totalCount: Int
pageInfo: PageInfo!
diffStat: DiffStat!
rawDiff: String!
}
type FileDiff {
oldPath: String
oldFile: File2
newPath: String
+ newFile: File2
+ mostRelevantFile: File2!
hunks: [FileDiffHunk!]!
stat: DiffStat!
internalID: String!
}
```
If we would reintroduce the `RepositoryComparison` on the preview level, the structure would be more aligned and it would also lay the foundation for codeintel on previews, where ultimately it would be useful to have equal `FileDiff` and `PreviewFileDiff`, so the connections can also be the same.
the only thing different would then be the `RepositoryComparison`, and the frontend would use the file at the base and apply the patch of the changesetplan to return the `File2` from a ""virtual file system"" to be provided to the hover providers.
So I'm suggesting here that we reintroduce
```
type PreviewRepositoryComparison {
baseRepository: Repository!
fileDiffs(first: Int): PreviewFileDiffConnection!
}
```",1.0,"a8n: Align API for previews and external changesets - I know I signed this off earlier, but it turns out to be quite a hassle that `ChangesetPlan` and `ExternalChangeset` are not very similar in their structure. It's just really confusing to use and if we get stuck with it during WebApp development already, how should our customers understand it.
```graphql
type ChangesetPlan {
repository: Repository!
fileDiffs(first: Int): PreviewFileDiffConnection!
}
type PreviewFileDiffConnection {
nodes: [PreviewFileDiff!]!
totalCount: Int
pageInfo: PageInfo!
diffStat: DiffStat!
rawDiff: String!
}
type PreviewFileDiff {
oldPath: String
oldFile: File2
newPath: String
hunks: [FileDiffHunk!]!
stat: DiffStat!
internalID: String!
}
```
```graphql
type ExternalChangeset {
repository: Repository!
diff: RepositoryComparison
... others
}
type RepositoryComparison {
baseRepository: Repository!
headRepository: Repository!
range: GitRevisionRange!
commits(first: Int): GitCommitConnection!
fileDiffs(first: Int): FileDiffConnection!
}
type FileDiffConnection {
nodes: [FileDiff!]!
totalCount: Int
pageInfo: PageInfo!
diffStat: DiffStat!
rawDiff: String!
}
type FileDiff {
oldPath: String
oldFile: File2
newPath: String
+ newFile: File2
+ mostRelevantFile: File2!
hunks: [FileDiffHunk!]!
stat: DiffStat!
internalID: String!
}
```
If we would reintroduce the `RepositoryComparison` on the preview level, the structure would be more aligned and it would also lay the foundation for codeintel on previews, where ultimately it would be useful to have equal `FileDiff` and `PreviewFileDiff`, so the connections can also be the same.
the only thing different would then be the `RepositoryComparison`, and the frontend would use the file at the base and apply the patch of the changesetplan to return the `File2` from a ""virtual file system"" to be provided to the hover providers.
So I'm suggesting here that we reintroduce
```
type PreviewRepositoryComparison {
baseRepository: Repository!
fileDiffs(first: Int): PreviewFileDiffConnection!
}
```",1, align api for previews and external changesets i know i signed this off earlier but it turns out to be quite a hassle that changesetplan and externalchangeset are not very similar in their structure it s just really confusing to use and if we get stuck with it during webapp development already how should our customers understand it graphql type changesetplan repository repository filediffs first int previewfilediffconnection type previewfilediffconnection nodes totalcount int pageinfo pageinfo diffstat diffstat rawdiff string type previewfilediff oldpath string oldfile newpath string hunks stat diffstat internalid string graphql type externalchangeset repository repository diff repositorycomparison others type repositorycomparison baserepository repository headrepository repository range gitrevisionrange commits first int gitcommitconnection filediffs first int filediffconnection type filediffconnection nodes totalcount int pageinfo pageinfo diffstat diffstat rawdiff string type filediff oldpath string oldfile newpath string newfile mostrelevantfile hunks stat diffstat internalid string if we would reintroduce the repositorycomparison on the preview level the structure would be more aligned and it would also lay the foundation for codeintel on previews where ultimately it would be useful to have equal filediff and previewfilediff so the connections can also be the same the only thing different would then be the repositorycomparison and the frontend would use the file at the base and apply the patch of the changesetplan to return the from a virtual file system to be provided to the hover providers so i m suggesting here that we reintroduce type previewrepositorycomparison baserepository repository filediffs first int previewfilediffconnection ,1
5720,20841255977.0,IssuesEvent,2022-03-21 00:07:38,theglus/Home-Assistant-Config,https://api.github.com/repos/theglus/Home-Assistant-Config,closed,Setup PC Switchbot,integration automation desktop,"# Requirements
- [x] Create automation to switch KVM when WoL.
- [x] Create automation to switch KVM when Shutdown is triggered.
# Resources",1.0,"Setup PC Switchbot - # Requirements
- [x] Create automation to switch KVM when WoL.
- [x] Create automation to switch KVM when Shutdown is triggered.
# Resources",1,setup pc switchbot requirements create automation to switch kvm when wol create automation to switch kvm when shutdown is triggered resources,1
611358,18953076108.0,IssuesEvent,2021-11-18 17:02:01,unicode-org/icu4x,https://api.github.com/repos/unicode-org/icu4x,opened,Figure out plan for constructing DateTimeFormat for different calendars,C-datetime discuss-priority,"Part of https://github.com/unicode-org/icu4x/issues/1115
The status quo of calendar support is that:
- We support `Date<C>` for different `Calendar`s `C` (there's an `AsCalendar` trait in there but we can mostly ignore it). Dates are strongly typed.
- At _some point_ we would like to add support for `ErasedCalendar` which can contain dates from any calendar. This does not currently exist, but one can imagine it as an enum of calendar values that raises errors when calendars are mixed.
- DateTimeFormat accepts `DateInput` objects. Currently only `Date` supports being used as a `DateInput`. Of course, we want to change that
- DTF data is split by variant; so you have to specify `variant: buddhist` (etc) when loading DTF data
- `DateTimeFormat::try_new()` loads data at construction time, so it too must specify a variant at construction time. It _can_ load multiple variants at once if desired.
We would like to add support for formatting non gregorian calendars with DTF. Some preexisting requirements are:
- **Architectural**: We have an existing architectural decision that data loading should be independent of formatting: You should walk into formatting with the appropriate data loaded already.
- **Performance**: We would strongly prefer to not unconditionally load all calendar data at once
## Option 1: Type parameter on DTF (compile time checks)
```rust
struct DateTimeFormat<C: Calendar> {...}
impl<C: Calendar> DateTimeFormat<C> {
    fn try_new(...) -> Result<Self, Error>
}
trait DateInput {
    ...
}
```
DTF is parametrized on the calendar type, so at compile time, one must choose to construct `DateTimeFormat<Gregorian>` or `DateTimeFormat<Buddhist>`. `DateTimeFormat<C>` will only accept `DateInput<C>`s with the same calendar, enforced at compile time.
`DateTimeFormat<ErasedCalendar>` will load all calendar data at once.
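To make the compile-time guarantee concrete, here is a minimal self-contained sketch of the Option 1 pattern; the `Gregorian`/`Buddhist` types, the `format` method, and the error type are placeholders for illustration, not the actual icu4x API.
```rust
// Minimal self-contained sketch of the Option 1 pattern. All names here
// (Gregorian, Buddhist, the format method) are invented for illustration
// and are not the actual icu4x API.
use std::marker::PhantomData;

trait Calendar {}
struct Gregorian;
struct Buddhist;
impl Calendar for Gregorian {}
impl Calendar for Buddhist {}

struct Date<C: Calendar> {
    day: u32,
    _cal: PhantomData<C>,
}

struct DateTimeFormat<C: Calendar> {
    _cal: PhantomData<C>,
}

impl<C: Calendar> DateTimeFormat<C> {
    fn try_new() -> Result<Self, ()> {
        // Data for calendar C would be loaded here, at construction time.
        Ok(Self { _cal: PhantomData })
    }

    // Only dates of the same calendar type are accepted; the mismatch case
    // in main() below is rejected by the compiler rather than at runtime.
    fn format(&self, date: &Date<C>) -> u32 {
        date.day
    }
}

fn main() {
    let dtf = DateTimeFormat::<Gregorian>::try_new().unwrap();
    let gregorian = Date::<Gregorian> { day: 18, _cal: PhantomData };
    assert_eq!(dtf.format(&gregorian), 18);

    // let buddhist = Date::<Buddhist> { day: 18, _cal: PhantomData };
    // dtf.format(&buddhist); // does not compile: calendar types differ
}
```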
If you wish to format values from multiple calendars, you have two options:
- At compile time: you can construct a DTF for each calendar you're going to be formatting; given that the dates for different calendars have different types anyway
- At runtime: You can construct a `DTF<ErasedCalendar>`, which will accept `Date<ErasedCalendar>` as well as specific calendars like `Date<Gregorian>` (etc).
Note that the naïve way of writing this can lead to code bloat: given that the calendar type is only needed at construction time, the way to write this would be to write a `DTFInner` which has a `try_new()` that takes in a string or enum value for the calendar type, and wrap it in a `DTF<C>` that is a thin wrapper. Otherwise Rust is likely to generate multiple copies of the mostly-identical functions.
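As a rough illustration of that thin-wrapper idea (all names below are invented for the sketch and are not the real icu4x code), the non-generic inner type can own the heavy construction logic, keyed off a runtime calendar identifier, while the generic layer stays trivially small:
```rust
// Sketch of the DTFInner-plus-thin-wrapper idea. All names are invented for
// the sketch and are not the real icu4x code.
use std::marker::PhantomData;

struct DateTimeFormatInner {
    calendar_id: u8, // plus whatever loaded data is needed
}

impl DateTimeFormatInner {
    fn try_new(calendar_id: u8) -> Result<Self, ()> {
        // Exactly one copy of this function ends up in the binary, no matter
        // how many calendar types the wrapper below is instantiated with.
        Ok(Self { calendar_id })
    }
}

trait Calendar {
    const ID: u8;
}

struct Gregorian;
impl Calendar for Gregorian {
    const ID: u8 = 1;
}

struct DateTimeFormat<C: Calendar> {
    inner: DateTimeFormatInner,
    _cal: PhantomData<C>,
}

impl<C: Calendar> DateTimeFormat<C> {
    fn try_new() -> Result<Self, ()> {
        // The generic layer only forwards the calendar id, so monomorphizing
        // it per calendar type is cheap.
        Ok(Self {
            inner: DateTimeFormatInner::try_new(C::ID)?,
            _cal: PhantomData,
        })
    }
}

fn main() {
    let _dtf = DateTimeFormat::<Gregorian>::try_new().unwrap();
}
```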
For `DTF<ErasedCalendar>` to work, `DTF` will need to be able to store a map of calendar data at once. I do not plan to do this immediately, but it's something that can be done when we add support for erased calendars.
This method does not really leave space open for dynamic data loading, though I guess that can be achieved on `DTF<ErasedCalendar>`.
## Option 2: Runtime option
```rust
struct DateTimeFormat {}

enum CalendarType {
    // This can also be a full enum with variants like Gregorian/Buddhist/etc
    BCP(&'static str),
    All,
}

impl DateTimeFormat {
    fn try_new(..., c: CalendarType) -> Result<Self, Error> {}
    // OR
    // This is essentially a fancier way of writing the above function
    // without requiring an additional parameter
    fn try_new<D: DateInput>(...) -> Result<Self, Error> {}
}

trait DateInput {
    const NeededCalendarType: CalendarType;
    ...
}
```
Here we specify the calendar we need at data load time, and DTF will attempt to load this data. If you attempt to format a date that uses a different calendar, DTF will error at runtime.
Similarly to the previous option, if and when we add support for `Erased` calendars and/or `CalendarType::All`, we'll need to have this contain some kind of map from calendar type to loaded data. I do not intend to do this immediately but I want to plan for it.
The nice thing is that this can be extended to support dynamic data loading in a much cleaner way (see below section).
Pros:
- More flexible at runtime
- Allows for dynamic data loading
Cons:
- Will error at runtime, not compile time
### Option 2 extension: dynamic data loading
This can work on Option 1 (`impl DateTimeFormat`) as well, but it's cleaner with Option 2. We can add dynamic data loading of the form
```rust
impl DateTimeFormat {
    fn load_data_for<C: Calendar>(&mut self);
    // or, for convenience
    fn load_data_for_date<D: DateInput>(&mut self, d: &D);
}
```
that allows users to dynamically stuff more data into the DTF as needed.
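For illustration, here is a small self-contained sketch of how the runtime approach plus this extension could behave from the caller's side; the map-based storage, the `CalendarKind` enum, and the `format` signature are invented for the sketch and are not the actual icu4x API.
```rust
// Self-contained sketch of the Option 2 runtime approach plus the dynamic
// loading extension. Names and signatures are invented for illustration.
use std::collections::HashMap;

#[derive(PartialEq, Eq, Hash, Clone, Copy, Debug)]
enum CalendarKind {
    Gregorian,
    Buddhist,
}

struct CalendarData; // stand-in for per-calendar formatting data

#[derive(Debug)]
enum Error {
    MissingCalendarData(CalendarKind),
}

struct DateTimeFormat {
    data: HashMap<CalendarKind, CalendarData>,
}

impl DateTimeFormat {
    fn try_new(calendar: CalendarKind) -> Result<Self, Error> {
        // Only the requested calendar's data is loaded at construction time.
        let mut data = HashMap::new();
        data.insert(calendar, CalendarData);
        Ok(Self { data })
    }

    // The extension: stuff more calendar data into the formatter later on.
    fn load_data_for(&mut self, calendar: CalendarKind) {
        self.data.entry(calendar).or_insert(CalendarData);
    }

    fn format(&self, calendar: CalendarKind, day: u32) -> Result<u32, Error> {
        self.data
            .get(&calendar)
            .map(|_| day)
            .ok_or(Error::MissingCalendarData(calendar))
    }
}

fn main() -> Result<(), Error> {
    let mut dtf = DateTimeFormat::try_new(CalendarKind::Gregorian)?;

    // A calendar whose data was never loaded fails at runtime, not at compile time.
    assert!(dtf.format(CalendarKind::Buddhist, 18).is_err());

    // ...and can be repaired by loading the missing data afterwards.
    dtf.load_data_for(CalendarKind::Buddhist);
    assert_eq!(dtf.format(CalendarKind::Buddhist, 18)?, 18);
    Ok(())
}
```
The point is simply that a calendar mismatch surfaces as a recoverable runtime error rather than a compile error, which is what makes the dynamic loading story cleaner here than in Option 1.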
## Option 3: Give up on a requirement
We can also give up on either the **Architectural** or **Performance** constraints as given above. I'm not super happy with doing this, but it's worth listing as an option.
Thoughts?
Input requested from:
- [ ] @zbraniecki
- [ ] @gregtatum
- [ ] @nordzilla
- [ ] @sffc
",1.0,"Figure out plan for constructing DateTimeFormat for different calendars - Part of https://github.com/unicode-org/icu4x/issues/1115
The status quo of calendar support is that:
- We support `Date<C>` for different `Calendar`s `C` (there's an `AsCalendar` trait in there but we can mostly ignore it). Dates are strongly typed.
- At _some point_ we would like to add support for `ErasedCalendar` which can contain dates from any calendar. This does not currently exist, but one can imagine it as an enum of calendar values that raises errors when calendars are mixed.
- DateTimeFormat accepts `DateInput` objects. Currently only `Date` supports being used as a `DateInput`. Of course, we want to change that
- DTF data is split by variant; so you have to specify `variant: buddhist` (etc) when loading DTF data
- `DateTimeFormat::try_new()` loads data at construction time, so it too must specify a variant at construction time. It _can_ load multiple variants at once if desired.
We would like to add support for formatting non-Gregorian calendars with DTF. Some preexisting requirements are:
- **Architectural**: We have an existing architectural decision that data loading should be independent of formatting: You should walk into formatting with the appropriate data loaded already.
- **Performance**: We would strongly prefer to not unconditionally load all calendar data at once
## Option 1: Type parameter on DTF (compile time checks)
```rust
struct DateTimeFormat<C: Calendar> { ... }

impl<C: Calendar> DateTimeFormat<C> {
    fn try_new(...) -> Result<Self, Error>
}

trait DateInput<C: Calendar> {
    ...
}
```
DTF is parametrized on the calendar type, so at compile time, one must choose to construct `DateTimeFormat<Gregorian>` or `DateTimeFormat<Buddhist>`. `DateTimeFormat<C>` will only accept `DateInput<C>`s with the same calendar, enforced at compile time.
`DateTimeFormat<ErasedCalendar>` will load all calendar data at once.
If you wish to format values from multiple calendars, you have two options:
- At compile time: you can construct a DTF for each calendar you're going to be formatting; given that the dates for different calendars have different types anyway
- At runtime: You can construct a `DTF<ErasedCalendar>`, which will accept `Date<ErasedCalendar>` as well as specific calendars like `Date<Gregorian>` (etc).
Note that the naïve way of writing this can lead to code bloat: given that the calendar type is only needed at construction time, the way to write this would be to write a `DTFInner` which has a `try_new()` that takes in a string or enum value for the calendar type, and wrap it in a `DTF<C>` that is a thin wrapper. Otherwise Rust is likely to generate multiple copies of the mostly-identical functions.
For `DTF<ErasedCalendar>` to work, `DTF` will need to be able to store a map of calendar data at once. I do not plan to do this immediately, but it's something that can be done when we add support for erased calendars.
This method does not really leave space open for dynamic data loading, though I guess that can be achieved on `DTF<ErasedCalendar>`.
## Option 2: Runtime option
```rust
struct DateTimeFormat {}

enum CalendarType {
    // This can also be a full enum with variants like Gregorian/Buddhist/etc
    BCP(&'static str),
    All,
}

impl DateTimeFormat {
    fn try_new(..., c: CalendarType) -> Result<Self, Error> {}
    // OR
    // This is essentially a fancier way of writing the above function
    // without requiring an additional parameter
    fn try_new<D: DateInput>(...) -> Result<Self, Error> {}
}

trait DateInput {
    const NeededCalendarType: CalendarType;
    ...
}
```
Here we specify the calendar we need at data load time, and DTF will attempt to load this data. If you attempt to format a date that uses a different calendar, DTF will error at runtime.
Similarly to the previous option, if and when we add support for `Erased` calendars and/or `CalendarType::All`, we'll need to have this contain some kind of map from calendar type to loaded data. I do not intend to do this immediately but I want to plan for it.
The nice thing is that this can be extended to support dynamic data loading in a much cleaner way (see below section).
Pros:
- More flexible at runtime
- Allows for dynamic data loading
Cons:
- Will error at runtime, not compile time
### Option 2 extension: dynamic data loading
This can work on Option 1 (`impl DateTimeFormat`) as well, but it's cleaner with Option 2. We can add dynamic data loading of the form
```rust
impl DateTimeFormat {
    fn load_data_for<C: Calendar>(&mut self);
    // or, for convenience
    fn load_data_for_date<D: DateInput>(&mut self, d: &D);
}
```
that allows users to dynamically stuff more data into the DTF as needed.
## Option 3: Give up on a requirement
We can also give up on either the **Architectural** or **Performance** constraints as given above. I'm not super happy with doing this, but it's worth listing as an option.
Thoughts?
Input requested from:
- [ ] @zbraniecki
- [ ] @gregtatum
- [ ] @nordzilla
- [ ] @sffc
",0,figure out plan for constructing datetimeformat for different calendars part of the status quo of calendar support is that we support date for different calendar s c there s an ascalendar trait in there but we can mostly ignore it dates are strongly typed at some point we would like to add support for erasedcalendar which can contain dates from any calendar this does not currently exist but one can imagine it as an enum of calendar values that raises errors when calendars are mixed datetimeformat accepts dateinput objects currently only date supports being used as a dateinput of course we want to change that dtf data is split by variant so you have to specify variant buddhist etc when loading dtf data datetimeformat try new loads data at construction time so it too must specify a variant at construction time it can load multiple variants at once if desired we would like to add support for formatting non gregorian calendars with dtf some preexisting requirements are architectural we have an existing architectural decision that data loading should be independent of formatting you should walk into formatting with the appropriate data loaded already performance we would strongly prefer to not unconditionally load all calendar data at once option type parameter on dtf compile time checks rust struct datetimeformat impl datetimeformat fn try new result trait dateinput dtf is parametrized on the calendar type so at compile time one must choose to construct datetimeformat or datetimeformat datetimeformat will only accept dateinput s with the same calendar enforced at compile time datetimeformat will load all calendar data at once if you wish to format values from multiple calendars you have two options at compile time you can construct a dtf for each calendar you re going to be formatting given that the dates for different calendars have different types anyway at runtime you can construct a dtf which will accept date as well as specific calendars like date etc note that the naïve way of writing this can lead to code bloat given that the calendar type is only needed at construction time the way to write this would be to write dtfinner which has a try new that takes in a string or enum value for calendar type and wrap it in a dtf that is a thin wrapper otherwise rust is likely to generate multiple copies of the mostly identical functions for dtf to work dtf will need to be able to store a map of calendar data at once i do not plan to do this immediately but it s something that can be done when we add support for erased calendars this method does not really leave space open for dynamic data loading though i guess that can be achieved on dtf option runtime option rust struct datetimeformat enum calendartype this can also be a full enum with variants like gregorian buddhist etc bcp static str all impl datetimeformat fn try new c calendartype result or this is essentially a fancier way of writing the above function without requiring an additional parameter fn try new result trait dateinput const neededcalendartype calendartype here we specify the calendar we need at data load time and dtf will attempt to load this data if you attempt to format a date that uses a different calendar dtf will error at runtime similarly to the previous option if and when we add support for erased calendars and or calendartype all we ll need to have this contain some kind of map from calendar type to loaded data i do not intend to do this immediately but i want to plan for it the nice thing is that this can be extended 
to support dynamic data loading in a much cleaner way see below section pros more flexible at runtime allows for dynamic data loading cons will error at runtime not compile time option extension dynamic data loading this can work on option impl datetimeformat as well but it s cleaner with option we can add dynamic data loading of the form rust impl datetimeformat fn load data for mut self or for convenience fn load data for date mut self d d that allows users to dynamically stuff more data into the dtf as needed option give up on a requirement we can also give up on either the architectural or performance constraints as given above i m not super happy with doing this but it s worth listing as an option thoughts input requested from zbraniecki gregtatum nordzilla sffc ,0
512989,14913745432.0,IssuesEvent,2021-01-22 14:32:58,arkhn/fhir-river,https://api.github.com/repos/arkhn/fhir-river,closed,move cleaning-scripts to river,enhancement high priority,"the ""scripts"" API should be moved over to river
It should not clone the git repo when starting.
The only route we need to keep is the one that lists scripts.
This is blocking at RMS since we cannot make outside calls (therefore cloning the repo does not work)",1.0,"move cleaning-scripts to river - the ""scripts"" API should be moved over to river
It should not clone the git repo when starting.
The only route we need to keep is the one that lists scripts.
This is blocking at RMS since we cannot make outside calls (therefore cloning the repo does not work)",0,move cleaning scripts to river the scripts api should be moved over to river it should not clone the git repo when starting the only route we need to keep is the one that lists scripts this is blocking at rms since we cannot make outside calls therefore cloning the repo does not work ,0
7268,24542176241.0,IssuesEvent,2022-10-12 05:25:45,DevExpress/testcafe,https://api.github.com/repos/DevExpress/testcafe,closed,Inform a user that a click was made on a wrong element,TYPE: enhancement SYSTEM: automations,"When testcafe clicks an element which is overlapped by the second element, it waits for the first element to appear. If the first element does not appear during the selector timeout period, then it clicks the second element.
It would be nice to inform users that a click is made on a wrong element.
```HTML
```
```JS
fixture `fixture`
    .page `../pages/index.html`;

test(`test`, async t => {
    await t
        .click('#child1');
});
```",1.0,"Inform a user that a click was made on a wrong element - When testcafe clicks an element which is overlapped by the second element, it waits for the first element to appear. If the first element does not appear during the selector timeout period, then it clicks the second element.
It would be nice to inform users that a click is made on a wrong element.
```HTML
```
```JS
fixture `fixture`
    .page `../pages/index.html`;

test(`test`, async t => {
    await t
        .click('#child1');
});
```",1,inform a user that a click was made on a wrong element when testcafe clicks an element which is overlapped by the second element it waits for the first element to appear if the first element do not appear during selector timeout period then it clicks the second element it would be nice to inform users that a click is made on a wrong element html parent position relative width height background color blue position absolute top left width height background color red position absolute top left js fixture fixture page pages index html test test async t await t click ,1
8164,26354694589.0,IssuesEvent,2023-01-11 08:48:43,nocodb/nocodb,https://api.github.com/repos/nocodb/nocodb,closed,After Update automation has access to old field value,🔦 Type: Feature 🚘 Scope : Automation,"Automation ""After Update"" event should have access to the old value of the field, not only new.",1.0,"After Update automation has access to old field value - Automation ""After Update"" event should have access to the old value of the field, not only new.",1,after update automation has access to old field value automation after update event should have access to the old value of the field not only new ,1
7239,24501184140.0,IssuesEvent,2022-10-10 12:56:06,smcnab1/op-question-mark,https://api.github.com/repos/smcnab1/op-question-mark,opened,[BUG] Reduce Notification Spam,🔬Status: Review Needed 🐛Type: Bug 🏔Priority: High 🚗For: Automations,"## **🐛Bug Report**
**Describe the bug**
* Often get repeat notifications for same things (Mail, Temp, Alarm Arm/Disarm)
---
**To Reproduce**
1.
2.
3.
4.
---
**Expected behavior**
* Look at possible [solutions](https://www.facebook.com/groups/HomeAssistant/permalink/3334780796793267/) like time period wait
---
**Screenshots**
---
**Desktop (please complete the following information):**
- OS:
- Browser
- Version
**Smartphone (please complete the following information):**
- Device:
- OS:
- Browser
- Version
---
**Additional context**
*
",1.0,"[BUG] Reduce Notification Spam - ## **🐛Bug Report**
**Describe the bug**
* Often get repeat notifications for same things (Mail, Temp, Alarm Arm/Disarm)
---
**To Reproduce**
1.
2.
3.
4.
---
**Expected behavior**
* Look at possible [solutions](https://www.facebook.com/groups/HomeAssistant/permalink/3334780796793267/) like time period wait
---
**Screenshots**
---
**Desktop (please complete the following information):**
- OS:
- Browser
- Version
**Smartphone (please complete the following information):**
- Device:
- OS:
- Browser
- Version
---
**Additional context**
*
",1, reduce notification spam 🐛bug report describe the bug often get repeat notifications for same things mail temp alarm arm disarm to reproduce steps to reproduce the error e g use x argument navigate to fill this information go to see error expected behavior look at possible like time period wait screenshots desktop please complete the following information use all the applicable bulleted list element for this specific issue and remove all the bulleted list elements that are not relevant for this issue os browser version smartphone please complete the following information device os browser version additional context 📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛 oh hi there 😄 to expedite issue processing please search open and closed issues before submitting a new one please read our rules of conduct at this repository s github code of conduct md 📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛 ,1
35886,9671527146.0,IssuesEvent,2019-05-21 23:12:25,hashicorp/packer,https://api.github.com/repos/hashicorp/packer,closed,Issue apparently very similar to 7500 on Windows 10,builder/hyperv duplicate regression,"Megan, I am running into the same issue as described in 7500 but on Windows 10. I pulled the latest build (1.4.1 circa 5.21.2019). The error seems nearly identical so was not sure if it was a platform difference or something else environmental as the last comment in issue 7500 refers to it being fixed in the master.
Running with elevated rights on:
OS Name Microsoft Windows 10 Pro
Version 10.0.17763 Build 17763
User has been added to the Hyper-V Admin Group
Log at error:
==> hyperv-iso: Enabling Integration Service...
==> hyperv-iso: PowerShell error: Hyper-V\Add-VMDvdDrive : Failed to add device 'Virtual CD/DVD Disk'.
==> hyperv-iso: Hyper-V Virtual Machine Management service Account does not have permission to open attachment.
==> hyperv-iso: 'dev-hyperv-base_name' failed to add device 'Virtual CD/DVD Disk'. (Virtual machine ID 2EA8890E-AF47-4C06-A9F4-1199E677B42F)
==> hyperv-iso: 'dev-hyperv-base_name': Hyper-V Virtual Machine Management service account does not have permission required to open attachment
==> hyperv-iso: 'C:\Users\xxxx\repos\packer_builds\packer_cache\08478213f4bb76a558776915c085b9de13744f87.iso'. Error: 'General access denied error' (0x80070005). (Virtual
==> hyperv-iso: machine ID 2EA8890E-AF47-4C06-A9F4-1199E677B42F)
==> hyperv-iso: At C:\Users\xxxx\AppData\Local\Temp\powershell294707329.ps1:3 char:18
==> hyperv-iso: + ... ontroller = Hyper-V\Add-VMDvdDrive -VMName $vmName -path $isoPath -Pa ...
==> hyperv-iso: + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
==> hyperv-iso: + CategoryInfo : PermissionDenied: (:) [Add-VMDvdDrive], VirtualizationException
==> hyperv-iso: + FullyQualifiedErrorId : AccessDenied,Microsoft.HyperV.PowerShell.Commands.AddVMDvdDrive
==> hyperv-iso: Unregistering and deleting virtual machine...
==> hyperv-iso: Deleting output directory...
==> hyperv-iso: Deleting build directory...
Build 'hyperv-iso' errored: PowerShell error: Hyper-V\Add-VMDvdDrive : Failed to add device 'Virtual CD/DVD Disk'.
Reference:
I was finally able to create a repro case for this. It turns out that it's already fixed on the master branch, probably by the PR Adrien linked above. If you're in a big rush for a fix you can use the Packer [nightly](https://github.com/hashicorp/packer/releases/tag/nightly) build until we release 1.4.1 tomorrow-ish :)
_Originally posted by @SwampDragons in https://github.com/hashicorp/packer/issues/7500#issuecomment-492389793_",1.0,"Issue apparently very similar to 7500 on Windows 10 - Megan, I am running into the same issue as described in 7500 but on Windows 10. I pulled the latest build (1.4.1 circa 5.21.2019). The error seems nearly identical so was not sure if it was a platform difference or something else environmental as the last comment in issue 7500 refers to it being fixed in the master.
Running with elevated rights on:
OS Name Microsoft Windows 10 Pro
Version 10.0.17763 Build 17763
User has been added to the Hyper-V Admin Group
Log at error:
==> hyperv-iso: Enabling Integration Service...
==> hyperv-iso: PowerShell error: Hyper-V\Add-VMDvdDrive : Failed to add device 'Virtual CD/DVD Disk'.
==> hyperv-iso: Hyper-V Virtual Machine Management service Account does not have permission to open attachment.
==> hyperv-iso: 'dev-hyperv-base_name' failed to add device 'Virtual CD/DVD Disk'. (Virtual machine ID 2EA8890E-AF47-4C06-A9F4-1199E677B42F)
==> hyperv-iso: 'dev-hyperv-base_name': Hyper-V Virtual Machine Management service account does not have permission required to open attachment
==> hyperv-iso: 'C:\Users\xxxx\repos\packer_builds\packer_cache\08478213f4bb76a558776915c085b9de13744f87.iso'. Error: 'General access denied error' (0x80070005). (Virtual
==> hyperv-iso: machine ID 2EA8890E-AF47-4C06-A9F4-1199E677B42F)
==> hyperv-iso: At C:\Users\xxxx\AppData\Local\Temp\powershell294707329.ps1:3 char:18
==> hyperv-iso: + ... ontroller = Hyper-V\Add-VMDvdDrive -VMName $vmName -path $isoPath -Pa ...
==> hyperv-iso: + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
==> hyperv-iso: + CategoryInfo : PermissionDenied: (:) [Add-VMDvdDrive], VirtualizationException
==> hyperv-iso: + FullyQualifiedErrorId : AccessDenied,Microsoft.HyperV.PowerShell.Commands.AddVMDvdDrive
==> hyperv-iso: Unregistering and deleting virtual machine...
==> hyperv-iso: Deleting output directory...
==> hyperv-iso: Deleting build directory...
Build 'hyperv-iso' errored: PowerShell error: Hyper-V\Add-VMDvdDrive : Failed to add device 'Virtual CD/DVD Disk'.
Reference:
I was finally able to create a repro case for this. It turns out that it's already fixed on the master branch, probably by the PR Adrien linked above. If you're in a big rush for a fix you can use the Packer [nightly](https://github.com/hashicorp/packer/releases/tag/nightly) build until we release 1.4.1 tomorrow-ish :)
_Originally posted by @SwampDragons in https://github.com/hashicorp/packer/issues/7500#issuecomment-492389793_",0,issue apparently very similar to on windows megan i am running into the same issue as described in but on windows i pulled the latest build circa the error seems nearly identical so was not sure if it was a platform difference or something else environmental as the last comment in issue refers to it being fixed in the master running with elevated rights on os name microsoft windows pro version build user has been added to the hyper v admin group log at error hyperv iso enabling integration service hyperv iso powershell error hyper v add vmdvddrive failed to add device virtual cd dvd disk hyperv iso hyper v virtual machine management service account does not have permission to open attachment hyperv iso dev hyperv base name failed to add device virtual cd dvd disk virtual machine id hyperv iso dev hyperv base name hyper v virtual machine management service account does not have permission required to open attachment hyperv iso c users xxxx repos packer builds packer cache iso error general access denied error virtual hyperv iso machine id hyperv iso at c users xxxx appdata local temp char hyperv iso ontroller hyper v add vmdvddrive vmname vmname path isopath pa hyperv iso hyperv iso categoryinfo permissiondenied virtualizationexception hyperv iso fullyqualifiederrorid accessdenied microsoft hyperv powershell commands addvmdvddrive hyperv iso unregistering and deleting virtual machine hyperv iso deleting output directory hyperv iso deleting build directory build hyperv iso errored powershell error hyper v add vmdvddrive failed to add device virtual cd dvd disk reference i was finally able to create a repro case for this it turns out that it s already fixed on the master branch probably by the pr adrien linked above if you re in a big rush for a fix you can use the packer build until we release tomorrow ish originally posted by swampdragons in ,0
952,8823562382.0,IssuesEvent,2019-01-02 14:06:54,arcus-azure/arcus.security,https://api.github.com/repos/arcus-azure/arcus.security,closed,Provide a release pipeline for NuGet.org,automation management,"Provide a release pipeline for NuGet.org.
### Checklist
- [x] Build the codebase
- [x] Run test suite
- [x] Tag the codebase on success
- [x] Create a GitHub pre-release for a preview releases
- [x] Create a GitHub release for full releases
- [x] Push all NuGet packages to NuGet.org",1.0,"Provide a release pipeline for NuGet.org - Provide a release pipeline for NuGet.org.
### Checklist
- [x] Build the codebase
- [x] Run test suite
- [x] Tag the codebase on success
- [x] Create a GitHub pre-release for a preview releases
- [x] Create a GitHub release for full releases
- [x] Push all NuGet packages to NuGet.org",1,provide a release pipeline for nuget org provide a release pipeline for nuget org checklist build the codebase run test suite tag the codebase on success create a github pre release for a preview releases create a github release for full releases push all nuget packages to nuget org,1
1808,10840581476.0,IssuesEvent,2019-11-12 08:40:12,elastic/opbeans-ruby,https://api.github.com/repos/elastic/opbeans-ruby,closed,It fails when you make some HTTP request,[zube]: In Review automation bug,"Opbeans Ruby does not respond to any HTTP request and shows this error
```
Puma caught this error: undefined method `set_label' for ElasticAPM:Module (NoMethodError)
/app/app/controllers/application_controller.rb:7:in `block in '
/usr/local/bundle/gems/activesupport-5.2.3/lib/active_support/callbacks.rb:426:in `instance_exec'
```",1.0,"It fails when you make some HTTP request - Opbeans Ruby does not respond to any HTTP request and shows this error
```
Puma caught this error: undefined method `set_label' for ElasticAPM:Module (NoMethodError)
/app/app/controllers/application_controller.rb:7:in `block in '
/usr/local/bundle/gems/activesupport-5.2.3/lib/active_support/callbacks.rb:426:in `instance_exec'
```",1,it fails when you make some http request opbeand ruby does not respond to any http request and shows this error puma caught this error undefined method set label for elasticapm module nomethoderror app app controllers application controller rb in block in usr local bundle gems activesupport lib active support callbacks rb in instance exec ,1
4245,15872537744.0,IssuesEvent,2021-04-09 00:07:18,SmartDataAnalytics/OpenResearch,https://api.github.com/repos/SmartDataAnalytics/OpenResearch,closed,Text Values in Ordinal Field,Migration WP 2.7 Hosting automation bug,Ordinal Field should be strictly numeric but there are text values in it for example 1st.,1.0,Text Values in Ordinal Field - Ordinal Field should be strictly numeric but there are text values in it for example 1st.,1,text values in ordinal field ordinal field should be strictly numeric but there are text values in it for example ,1
31000,25239777109.0,IssuesEvent,2022-11-15 06:06:52,leanprover/vscode-lean4,https://api.github.com/repos/leanprover/vscode-lean4,closed,get es-module-shims from NPM,infrastructure,"See discussion under https://github.com/leanprover/vscode-lean4/pull/167#issuecomment-1171741061,
specifically:
> I don't remember why I wrote 'it's not on NPM' even though it actually is. I may have just missed it. Note that the [version](https://ga.jspm.io/npm:es-module-shims@1.5.8/dist/es-module-shims.js) recommended in their README is different than [the NPM version](https://unpkg.com/es-module-shims@1.5.8/dist/es-module-shims.js), but it might just be minified. If you can get the NPM one working, it would be better. Since the point is to make Gitpod work, that would be the thing to test.",1.0,"get es-module-shims from NPM - See discussion under https://github.com/leanprover/vscode-lean4/pull/167#issuecomment-1171741061,
specifically:
> I don't remember why I wrote 'it's not on NPM' even though it actually is. I may have just missed it. Note that the [version](https://ga.jspm.io/npm:es-module-shims@1.5.8/dist/es-module-shims.js) recommended in their README is different than [the NPM version](https://unpkg.com/es-module-shims@1.5.8/dist/es-module-shims.js), but it might just be minified. If you can get the NPM one working, it would be better. Since the point is to make Gitpod work, that would be the thing to test.",0,get es module shims from npm see discussion under specifically i don t remember why i wrote it s not on npm even though it actually is i may have just missed it note that the recommended in their readme is different than but it might just be minified if you can get the npm one working it would be better since the point is to make gitpod work that would be the thing to test ,0
4551,16835300645.0,IssuesEvent,2021-06-18 11:11:43,keptn/keptn,https://api.github.com/repos/keptn/keptn,opened,Create list of dependencies of Keptn Core,automation,"## Action Item
Create a list of dependencies + their licences of all components of Keptn core and monitoring related services.
Only consider **direct** dependencies.
### Details
Try using a tool that detects dependencies and their licences **automatically**
Possible candidates:
* https://github.com/google/go-licenses
* https://github.com/ribice/glice
* ... ? tba
## Acceptance Criteria
- [ ] Easy readable table containing all dependencies of all Keptn components, their version as well as their licence available.
- [ ] Nice to have: this analysis is reproducible using a GH action ;)
",1.0,"Create list of dependencies of Keptn Core - ## Action Item
Create a list of dependencies + their licences of all components of Keptn core and monitoring related services.
Only consider **direct** dependencies.
### Details
Try using a tool that detects dependencies and their licences **automatically**
Possible candidates:
* https://github.com/google/go-licenses
* https://github.com/ribice/glice
* ... ? tba
## Acceptance Criteria
- [ ] Easy readable table containing all dependencies of all Keptn components, their version as well as their licence available.
- [ ] Nice to have: this analysis is reproducible using a GH action ;)
",1,create list of dependencies of keptn core action item create a list of dependencies their licences of all components of keptn core and monitoring related services only consider direct dependencies details try using a tool that detects dependencies and their licences automatically possible candidates tba acceptance criteria easy readable table containing all dependencies of all keptn components their version as well as their licence available nice to have this analysis is reproducible using a gh action ,1
571356,17023289524.0,IssuesEvent,2021-07-03 01:15:14,tomhughes/trac-tickets,https://api.github.com/repos/tomhughes/trac-tickets,closed,[PATCH] Expand tables in the properties dock to the available width by default,Component: merkaartor Priority: minor Resolution: fixed Type: enhancement,"**[Submitted to the original trac issue database at 10.29am, Saturday, 30th August 2008]**
It would be nice if the tag & role table views in the properties dock automatically expanded to the width of the dock instead of always having to drag them out wider; the attached patch (against current subversion) implements this behavior.
",1.0,"[PATCH] Expand tables in the properties dock to the available width by default - **[Submitted to the original trac issue database at 10.29am, Saturday, 30th August 2008]**
It would be nice if the tag & role table views in the properties dock automatically expanded to the width of the dock instead of always having to drag them out wider; the attached patch (against current subversion) implements this behavior.
",0, expand tables in the properties dock to the available width by default it would be nice if the tag role table views in the properties dock automatically expanded to the width of the dock instead of always having to drag them out wider the attached patch against current subversion implements this behavior ,0
201583,23018614184.0,IssuesEvent,2022-07-22 01:07:45,valdisiljuconoks/MvcAreasForEPiServer,https://api.github.com/repos/valdisiljuconoks/MvcAreasForEPiServer,closed,CVE-2018-0765 (High) detected in system.security.cryptography.xml.4.4.1.nupkg - autoclosed,security vulnerability,"## CVE-2018-0765 - High Severity Vulnerability
Vulnerable Library - system.security.cryptography.xml.4.4.1.nupkg
Provides classes to support the creation and validation of XML digital signatures. The classes in th...
A denial of service vulnerability exists when .NET and .NET Core improperly process XML documents, aka "".NET and .NET Core Denial of Service Vulnerability."" This affects Microsoft .NET Framework 2.0, Microsoft .NET Framework 3.0, Microsoft .NET Framework 4.7.1, Microsoft .NET Framework 4.6/4.6.1/4.6.2/4.7/4.7.1, Microsoft .NET Framework 4.5.2, Microsoft .NET Framework 4.7/4.7.1, Microsoft .NET Framework 4.6, Microsoft .NET Framework 3.5, Microsoft .NET Framework 3.5.1, Microsoft .NET Framework 4.6/4.6.1/4.6.2, Microsoft .NET Framework 4.6.2/4.7/4.7.1, .NET Core 2.0, Microsoft .NET Framework 4.7.2.
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2018-0765 (High) detected in system.security.cryptography.xml.4.4.1.nupkg - autoclosed - ## CVE-2018-0765 - High Severity Vulnerability
Vulnerable Library - system.security.cryptography.xml.4.4.1.nupkg
Provides classes to support the creation and validation of XML digital signatures. The classes in th...
A denial of service vulnerability exists when .NET and .NET Core improperly process XML documents, aka "".NET and .NET Core Denial of Service Vulnerability."" This affects Microsoft .NET Framework 2.0, Microsoft .NET Framework 3.0, Microsoft .NET Framework 4.7.1, Microsoft .NET Framework 4.6/4.6.1/4.6.2/4.7/4.7.1, Microsoft .NET Framework 4.5.2, Microsoft .NET Framework 4.7/4.7.1, Microsoft .NET Framework 4.6, Microsoft .NET Framework 3.5, Microsoft .NET Framework 3.5.1, Microsoft .NET Framework 4.6/4.6.1/4.6.2, Microsoft .NET Framework 4.6.2/4.7/4.7.1, .NET Core 2.0, Microsoft .NET Framework 4.7.2.
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve high detected in system security cryptography xml nupkg autoclosed cve high severity vulnerability vulnerable library system security cryptography xml nupkg provides classes to support the creation and validation of xml digital signatures the classes in th library home page a href path to dependency file mvcareasforepiserver src mvcareasforepiserver mvcareasforepiserver csproj path to vulnerable library dotnet ftzgbk system security cryptography xml system security cryptography xml nupkg dependency hierarchy x system security cryptography xml nupkg vulnerable library found in head commit a href vulnerability details a denial of service vulnerability exists when net and net core improperly process xml documents aka net and net core denial of service vulnerability this affects microsoft net framework microsoft net framework microsoft net framework microsoft net framework microsoft net framework microsoft net framework microsoft net framework microsoft net framework microsoft net framework microsoft net framework microsoft net framework net core microsoft net framework publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource ,0
7902,4102394750.0,IssuesEvent,2016-06-04 00:50:43,jeff1evesque/machine-learning,https://api.github.com/repos/jeff1evesque/machine-learning,closed,Move arguments in 'setup_tables.py' into 'settings.yaml',build enhancement,"We will move the arguments used for populating `tbl_model_type` into `settings.yaml`. Then, we will respectively reference the yaml attribute within `setup_tables.py`.",1.0,"Move arguments in 'setup_tables.py' into 'settings.yaml' - We will move the arguments used for populating `tbl_model_type` into `settings.yaml`. Then, we will respectively reference the yaml attribute within `setup_tables.py`.",0,move arguments in setup tables py into settings yaml we will move the arguments used for populating tbl model type into settings yaml then we will respectively reference the yaml attribute within setup tables py ,0
672,7752671874.0,IssuesEvent,2018-05-30 21:02:52,Shopify/quilt,https://api.github.com/repos/Shopify/quilt,opened,Set code coverage threshold of 80% for new packages.,automation difficulty: easy polish,"For libraries like these we should expect a high level of coverage. We should formalize this by making our coverage checks require 80%+ coverage and be blocking to deploys.
This would help encourage us not to merge untested packages or fix bugs without adding tests for them.",1.0,"Set code coverage threshold of 80% for new packages. - For libraries like these we should expect a high level of coverage. We should formalize this by making our coverage checks require 80%+ coverage and be blocking to deploys.
This would help encourage us not to merge untested packages or fix bugs without adding tests for them.",1,set code coverage threshold of for new packages for libraries like these we should expect a high level of coverage we should formalize this by making our coverage checks require coverage and be blocking to deploys this would help encourage us not to merge untested packages or fix bugs without adding tests for them ,1
370768,10948562472.0,IssuesEvent,2019-11-26 09:09:16,input-output-hk/jormungandr,https://api.github.com/repos/input-output-hk/jormungandr,closed,`max_number_of_transactions_per_block` does not increase more than 250,Priority - Low enhancement subsys-mempool wontfix,"**Describe the bug**
`max_number_of_transactions_per_block` is limited to 250.
**Mandatory Information**
```
jcli 0.7.0 (HEAD-a93d4f67, release, windows [x86_64]) - [rustc 1.39.0 (4560ea788 2019-11-04)]
jormungandr 0.7.0 (HEAD-a93d4f67, release, windows [x86_64]) - [rustc 1.39.0 (4560ea788 2019-11-04)]
```
**To Reproduce**
Steps to reproduce the behavior:
1. start node1 - `jormungandr ---genesis-block block-0.bin --config node_config.yaml --secret node_secret.yaml`
2. start node2 - `jormungandr --config node_config.yaml --secret node_secret.yaml --genesis-block-hash 3bae53f25be7523ce63c1dc09c9d3b3fbf7dac810e095bff1b8f498fa8de4e4d`
3. extract the scripts in the same folder
4. run script - `bash multipleBashScripts.sh 9001 ed25519e_sk18r7nd20gaxjfgmahyqu2vngv98leqefcdcft2nevcakpf999spx55t4ph8ryqslp6ac7uryekjcqsqzl63rjpmh0k92dvquesweq38cc8a0wc`
**Expected behavior**
""max_number_of_transactions_per_block"" should respect the set value inside genesis file.
**Additional context**
- scenario: 2 stake pool nodes connected together
- in my genesis file
```
- ""max_number_of_transactions_per_block"": 1000
- ""consensus_genesis_praos_active_slot_coeff"": 0.1
- ""slot_duration"": 2
- ""slots_per_epoch"": 110
````
- `multipleBashScripts.sh` is creating 10 accounts that will initiate, in parallel, 100 transactions each to a new account each. There will be ~12 txs per second for ~70 seconds.
- node 1 files
[node1.zip](https://github.com/input-output-hk/jormungandr/files/3852402/node1.zip)
- node 2 files
[node2.zip](https://github.com/input-output-hk/jormungandr/files/3852403/node2.zip)
- scripts
[scripts.zip](https://github.com/input-output-hk/jormungandr/files/3852401/scripts.zip)
- as you can see in the below picture, even there were 530 fragments in Pending, only 250 were included between 2 consecutive blocks

- using the attached python script we can look also at the fragment counts per blocks --> again there is a maximum of 250 fragments per block
```
D:\iohk\otherProjects\jormungandr\scripts\local_cluster>python logs_analyzer.py -l 9001 -t
================= Node 9001 - Transactions per block/epoch ====================
{'InABlock': {'block': '8e56eebc04fa4434561d5fe59ff18e2000e1c78b15943c681c6f09ea5fd8e8de', 'date': '5377.32'}} 250
{'InABlock': {'block': '68e18da55a6b987e95fba371f15cf94374e34f99f1d57598b3e139420fe92178', 'date': '5377.51'}} 250
{'InABlock': {'block': 'decf16be5221815b2756f71eda3a335673b9c9281cab660175ff928902fbe20a', 'date': '5377.58'}} 202
{'InABlock': {'block': '7a57c648275b42fa400c1c51b402df7c4afd8e74d2d6b00b95d1e536109e3ceb', 'date': '5377.12'}} 1
{'InABlock': {'block': '5822659ffcdd772142b1ccfe63993b03581a0b26d0b295b139c46526f973a32b', 'date': '5375.87'}} 1
{'InABlock': {'block': '8ba4482a1eb75a591fc45eec45c36502992c5ad58e520a8065aa0fc862c58914', 'date': '5376.34'}} 1
{'InABlock': {'block': '460f81e742e4129cc8ffbcb3e34361974fedbfcd08ff5251c00e4a0c34e94038', 'date': '5376.66'}} 1
{'InABlock': {'block': '7bb61722e459e5edd2cffe3cae888fe3d37ddbaab372e8073cc32a08e553dbcf', 'date': '5376.56'}} 1
{'InABlock': {'block': '454d439a6ee02cd99919759ea8f47ab3d143ff2d231f809006334c8bf3ff8e23', 'date': '5376.101'}} 1
{'InABlock': {'block': 'c2a4d9ff9966c6ca7b61cced450dddd4238df50d5e82b44ab7efd52b0fb83127', 'date': '5376.87'}} 1
```
",1.0,"`max_number_of_transactions_per_block` does not increase more than 250 - **Describe the bug**
`max_number_of_transactions_per_block` is limited to 250.
**Mandatory Information**
```
jcli 0.7.0 (HEAD-a93d4f67, release, windows [x86_64]) - [rustc 1.39.0 (4560ea788 2019-11-04)]
jormungandr 0.7.0 (HEAD-a93d4f67, release, windows [x86_64]) - [rustc 1.39.0 (4560ea788 2019-11-04)]
```
**To Reproduce**
Steps to reproduce the behavior:
1. start node1 - `jormungandr ---genesis-block block-0.bin --config node_config.yaml --secret node_secret.yaml`
2. start node2 - `jormungandr --config node_config.yaml --secret node_secret.yaml --genesis-block-hash 3bae53f25be7523ce63c1dc09c9d3b3fbf7dac810e095bff1b8f498fa8de4e4d`
3. extract the scripts in the same folder
4. run script - `bash multipleBashScripts.sh 9001 ed25519e_sk18r7nd20gaxjfgmahyqu2vngv98leqefcdcft2nevcakpf999spx55t4ph8ryqslp6ac7uryekjcqsqzl63rjpmh0k92dvquesweq38cc8a0wc`
**Expected behavior**
""max_number_of_transactions_per_block"" should respect the set value inside genesis file.
**Additional context**
- scenario: 2 stake pool nodes connected together
- in my genesis file
```
- ""max_number_of_transactions_per_block"": 1000
- ""consensus_genesis_praos_active_slot_coeff"": 0.1
- ""slot_duration"": 2
- ""slots_per_epoch"": 110
```
- `multipleBashScripts.sh` is creating 10 accounts that will initiate, in parallel, 100 transactions each to a new account each. There will be ~12 txs per second for ~70 seconds.
- node 1 files
[node1.zip](https://github.com/input-output-hk/jormungandr/files/3852402/node1.zip)
- node 2 files
[node2.zip](https://github.com/input-output-hk/jormungandr/files/3852403/node2.zip)
- scripts
[scripts.zip](https://github.com/input-output-hk/jormungandr/files/3852401/scripts.zip)
- as you can see in the below picture, even there were 530 fragments in Pending, only 250 were included between 2 consecutive blocks

- using the attached python script we can look also at the fragment counts per blocks --> again there is a maximum of 250 fragments per block
```
D:\iohk\otherProjects\jormungandr\scripts\local_cluster>python logs_analyzer.py -l 9001 -t
================= Node 9001 - Transactions per block/epoch ====================
{'InABlock': {'block': '8e56eebc04fa4434561d5fe59ff18e2000e1c78b15943c681c6f09ea5fd8e8de', 'date': '5377.32'}} 250
{'InABlock': {'block': '68e18da55a6b987e95fba371f15cf94374e34f99f1d57598b3e139420fe92178', 'date': '5377.51'}} 250
{'InABlock': {'block': 'decf16be5221815b2756f71eda3a335673b9c9281cab660175ff928902fbe20a', 'date': '5377.58'}} 202
{'InABlock': {'block': '7a57c648275b42fa400c1c51b402df7c4afd8e74d2d6b00b95d1e536109e3ceb', 'date': '5377.12'}} 1
{'InABlock': {'block': '5822659ffcdd772142b1ccfe63993b03581a0b26d0b295b139c46526f973a32b', 'date': '5375.87'}} 1
{'InABlock': {'block': '8ba4482a1eb75a591fc45eec45c36502992c5ad58e520a8065aa0fc862c58914', 'date': '5376.34'}} 1
{'InABlock': {'block': '460f81e742e4129cc8ffbcb3e34361974fedbfcd08ff5251c00e4a0c34e94038', 'date': '5376.66'}} 1
{'InABlock': {'block': '7bb61722e459e5edd2cffe3cae888fe3d37ddbaab372e8073cc32a08e553dbcf', 'date': '5376.56'}} 1
{'InABlock': {'block': '454d439a6ee02cd99919759ea8f47ab3d143ff2d231f809006334c8bf3ff8e23', 'date': '5376.101'}} 1
{'InABlock': {'block': 'c2a4d9ff9966c6ca7b61cced450dddd4238df50d5e82b44ab7efd52b0fb83127', 'date': '5376.87'}} 1
```
",0, max number of transactions per block does not increase more than describe the bug max number of transactions per block is limited to mandatory information jcli head release windows jormungandr head release windows to reproduce steps to reproduce the behavior start jormungandr genesis block block bin config node config yaml secret node secret yaml start jormungandr config node config yaml secret node secret yaml genesis block hash extract the scripts in the same folder run script bash multiplebashscripts sh expected behavior max number of transactions per block should respect the set value inside genesis file additional context scenario stake pool nodes connected together in my genesis file max number of transactions per block consensus genesis praos active slot coeff slot duration slots per epoch multiplebashscripts sh is creating accounts that will initiate in parallel transactions each to a new account each there will be txs per second for seconds node files node files scripts as you can see in the below picture even there were fragments in pending only were included between consecutive blocks using the attached python script we can look also at the fragment counts per blocks again there is a maximum of fragments per block d iohk otherprojects jormungandr scripts local cluster python logs analyzer py l t node transactions per block epoch inablock block date inablock block date inablock block date inablock block date inablock block date inablock block date inablock block date inablock block date inablock block date inablock block date ,0
201399,7031010780.0,IssuesEvent,2017-12-26 14:27:06,andresriancho/w3af,https://api.github.com/repos/andresriancho/w3af,opened,RCE via Spring Engine SSTI,easy improvement plugin priority:low,"It would be nice to have a plugin which tests for this vulnerability!
https://hawkinsecurity.com/2017/12/13/rce-via-spring-engine-ssti/",1.0,"RCE via Spring Engine SSTI - It would be nice to have a plugin which tests for this vulnerability!
https://hawkinsecurity.com/2017/12/13/rce-via-spring-engine-ssti/",0,rce via spring engine ssti it would be nice to have a plugin which tests for this vulnerability ,0
821,8299107625.0,IssuesEvent,2018-09-21 00:51:32,Azure/azure-powershell,https://api.github.com/repos/Azure/azure-powershell,opened,Register-AzureRmAutomationDscNode relies on Resources module,Automation automation-dsc,"The code is here:
https://github.com/Azure/azure-powershell/blob/9241c6a9efba0628a20201af2ed3c627b237b9c9/src/ResourceManager/Automation/Commands.Automation/Common/AutomationClientDSC.cs#L847-L860
As you can see, this cmdlet is actually trying to run the Resources module to deploy a resource group. This is not allowed in any of our modules. Instead, the code needs to use our internal Resources SDK. Create one in your PS Automation client, similar to this:
https://github.com/Azure/azure-powershell/blob/9241c6a9efba0628a20201af2ed3c627b237b9c9/src/ResourceManager/RecoveryServices/Commands.RecoveryServices/Common/PSRecoveryServicesClient.cs#L106
Then, you would replace the entire block of creating a PS runspace with a call to that client. It would look similar to this:
https://github.com/Azure/azure-powershell/blob/9241c6a9efba0628a20201af2ed3c627b237b9c9/src/ResourceManager/RecoveryServices/Commands.RecoveryServices/Common/PSRecoveryServicesVaultClient.cs#L83-L84
Except, since you want to create a resource group, you would use the `CreateOrUpdateWithHttpMessagesAsync` method.
This **needs to be changed** or else this cmdlet does not work independently. Meaning, it only works as part of `AzureRM`. Additionally, this cmdlet will not work in `Az` at all, since the cmdlet name uses *AzureRm* to be called. I'd recommend getting this fixed as soon as possible.
If you are aware of this pattern used anywhere else in your cmdlets, those places **must also be fixed**.",2.0,"Register-AzureRmAutomationDscNode relies on Resources module - The code is here:
https://github.com/Azure/azure-powershell/blob/9241c6a9efba0628a20201af2ed3c627b237b9c9/src/ResourceManager/Automation/Commands.Automation/Common/AutomationClientDSC.cs#L847-L860
As you can see, this cmdlet is actually trying to run the Resources module to deploy a resource group. This is not allowed in any of our modules. Instead, the code needs to use our internal Resources SDK. Create one in your PS Automation client, similar to this:
https://github.com/Azure/azure-powershell/blob/9241c6a9efba0628a20201af2ed3c627b237b9c9/src/ResourceManager/RecoveryServices/Commands.RecoveryServices/Common/PSRecoveryServicesClient.cs#L106
Then, you would replace the entire block of creating a PS runspace with a call to that client. It would look similar to this:
https://github.com/Azure/azure-powershell/blob/9241c6a9efba0628a20201af2ed3c627b237b9c9/src/ResourceManager/RecoveryServices/Commands.RecoveryServices/Common/PSRecoveryServicesVaultClient.cs#L83-L84
Except, since you want to create a resource group, you would use the `CreateOrUpdateWithHttpMessagesAsync` method.
This **needs to be changed** or else this cmdlet does not work independently. Meaning, it only works as part of `AzureRM`. Additionally, this cmdlet will not work in `Az` at all, since the cmdlet name uses *AzureRm* to be called. I'd recommend getting this fixed as soon as possible.
If you are aware of this pattern used anywhere else in your cmdlets, those places **must also be fixed**.",1,register azurermautomationdscnode relies on resources module the code is here as you can see this cmdlet is actually trying to run the resources module to deploy a resource group this is not allowed in any of our modules instead the code needs to use our internal resources sdk create one in your ps automation client similar to this then you would replace the entire block of creating a ps runspace with a call to that client it would look similar to this except since you want to create a resource group you would use the createorupdatewithhttpmessagesasync method this needs to be changed or else this cmdlet does not work independently meaning it only works as part of azurerm additionally this cmdlet will not work in az at all since the cmdlet name uses azurerm to be called i d recommend getting this fixed as soon as possible if you are aware of this pattern used anywhere else in your cmdlets those places must also be fixed ,1
5442,19604874410.0,IssuesEvent,2022-01-06 08:07:27,tikv/tikv,https://api.github.com/repos/tikv/tikv,closed,tikv have not logs saved in k8s ,type/bug severity/major found/automation,"## Bug Report
### What version of TiKV are you using?
/ # ./tikv-server -V
TiKV
Release Version: 5.4.0-alpha
Edition: Community
Git Commit Hash: 99b3436
Git Commit Branch: heads/refs/tags/v5.4.0-nightly
UTC Build Time: 2022-01-04 01:15:55
Rust Version: rustc 1.56.0-nightly (2faabf579 2021-07-27)
Enable Features: jemalloc mem-profiling portable sse test-engines-rocksdb cloud-aws cloud-gcp cloud-azure
Profile: dist_release
### What operating system and CPU are you using?
8core 16G
### Steps to reproduce
no matter
### What did you expect?
tikv logs can be saved
### What happened?
TiKV does not have logs saved in k8s

",1.0,"tikv have not logs saved in k8s - ## Bug Report
### What version of TiKV are you using?
/ # ./tikv-server -V
TiKV
Release Version: 5.4.0-alpha
Edition: Community
Git Commit Hash: 99b3436
Git Commit Branch: heads/refs/tags/v5.4.0-nightly
UTC Build Time: 2022-01-04 01:15:55
Rust Version: rustc 1.56.0-nightly (2faabf579 2021-07-27)
Enable Features: jemalloc mem-profiling portable sse test-engines-rocksdb cloud-aws cloud-gcp cloud-azure
Profile: dist_release
### What operating system and CPU are you using?
8core 16G
### Steps to reproduce
no matter
### What did you expect?
tikv logs can be saved
### What happened?
TiKV does not have logs saved in k8s

",1,tikv have not logs saved in bug report what version of tikv are you using tikv server v tikv release version alpha edition community git commit hash git commit branch heads refs tags nightly utc build time rust version rustc nightly enable features jemalloc mem profiling portable sse test engines rocksdb cloud aws cloud gcp cloud azure profile dist release what operating system and cpu are you using steps to reproduce no matter what did you expect tikv logs can be saved what did happened tikv have not logs saved in ,1
9197,27712655066.0,IssuesEvent,2023-03-14 15:07:10,githubcustomers/discovery.co.za,https://api.github.com/repos/githubcustomers/discovery.co.za,opened,Task One: Getting Started,ghas-trial automation Important,"# Task One: Getting Started
Before following these steps, make sure you have understood and are happy with all the pre-requisites that need to be completed within the pre-requisites section of the project board. Once happy carry on below.
Below you will find some helpful links for getting started with your GitHub Advanced Security Proof of Concept. :fireworks:
- [Configuring CodeQL](https://docs.github.com/en/github/finding-security-vulnerabilities-and-errors-in-your-code/configuring-codeql-code-scanning-in-your-ci-system)
- [Running additional queries](https://docs.github.com/en/github/finding-security-vulnerabilities-and-errors-in-your-code/configuring-code-scanning#running-additional-queries)
- [CodeQL CLI Docs](https://codeql.github.com/docs/codeql-cli/getting-started-with-the-codeql-cli)
- [Integrating other tools with GHAS](https://docs.github.com/en/enterprise-cloud@latest/code-security/code-scanning/integrating-with-code-scanning/about-integration-with-code-scanning)
- [Running in your CI System](https://docs.github.com/en/github/finding-security-vulnerabilities-and-errors-in-your-code/running-codeql-code-scanning-in-your-ci-system)
- [GitHub/Microsoft Queries for Solarigate](https://www.microsoft.com/security/blog/2021/02/25/microsoft-open-sources-codeql-queries-used-to-hunt-for-solorigate-activity)
- [CWE Query Mapping Documentation](https://codeql.github.com/codeql-query-help/codeql-cwe-coverage)
Multiple issues have been created to help guide you along with this POC. This issue should align with the strategic goals you made as part of the pre-req.
It helps to run some of these tasks in order; we recommend you follow the below:
- Task One: Enabling Code Scanning and Secret Scanning
- Task Two: Run default code-scanning queries
- Task Three: Run additional code-scanning queries
- Task Four: Configuring CodeQL Scans
- Task Five: Establish Continuous Application Security Scanning
- Task Six: Render results of other SARIF-based SAST tools directly within the GitHub UI (If Required)
- Task Seven: Compare Other SAST and CodeQL Results
- Task Eight: Bulk Enabling Code Scanning across multiple Repositories Quickly
- Task Nine: Developer Experience Task
- Task Ten: Core Language Support for your Organisation
- Task Eleven: Parallel scans
- Task Twelve: Detection of secret keys from known token formats committed to private repositories
- Task Thirteen: Secret Scanning Integration
- Task Fourteen: Test Custom Token Expressions
The final task, collect some informal feedback. This is great to help understand how developers have found using the tool during the PoC. Information on this task can be found here:
- Task Fifteen: Capture discussion about secure code development decisions
",1.0,"Task One: Getting Started - # Task One: Getting Started
Before following these steps, make sure you have understood and are happy with all the pre-requisites that need to be completed within the pre-requisites section of the project board. Once happy carry on below.
Below you will find some helpful links for getting started with your GitHub Advanced Security Proof of Concept. :fireworks:
- [Configuring CodeQL](https://docs.github.com/en/github/finding-security-vulnerabilities-and-errors-in-your-code/configuring-codeql-code-scanning-in-your-ci-system)
- [Running additional queries](https://docs.github.com/en/github/finding-security-vulnerabilities-and-errors-in-your-code/configuring-code-scanning#running-additional-queries)
- [CodeQL CLI Docs](https://codeql.github.com/docs/codeql-cli/getting-started-with-the-codeql-cli)
- [Integrating other tools with GHAS](https://docs.github.com/en/enterprise-cloud@latest/code-security/code-scanning/integrating-with-code-scanning/about-integration-with-code-scanning)
- [Running in your CI System](https://docs.github.com/en/github/finding-security-vulnerabilities-and-errors-in-your-code/running-codeql-code-scanning-in-your-ci-system)
- [GitHub/Microsoft Queries for Solorigate](https://www.microsoft.com/security/blog/2021/02/25/microsoft-open-sources-codeql-queries-used-to-hunt-for-solorigate-activity)
- [CWE Query Mapping Documentation](https://codeql.github.com/codeql-query-help/codeql-cwe-coverage)
Multiple issues have been created to help guide you through this PoC. This issue should align with the strategic goals you set as part of the pre-requisites.
It helps to run some of these tasks in order; we recommend you follow the order below:
- Task One: Enabling Code Scanning and Secret Scanning
- Task Two: Run default code-scanning queries
- Task Three: Run additional code-scanning queries
- Task Four: Configuring CodeQL Scans
- Task Five: Establish Continuous Application Security Scanning
- Task Six: Render results of other SARIF-based SAST tools directly within the GitHub UI (If Required)
- Task Seven: Compare Other SAST and CodeQL Results
- Task Eight: Bulk Enabling Code Scanning across multiple Repositories Quickly
- Task Nine: Developer Experience Task
- Task Ten: Core Language Support for your Organisation
- Task Eleven: Parallel scans
- Task Twelve: Detection of secret keys from known token formats committed to private repositories
- Task Thirteen: Secret Scanning Integration
- Task Fourteen: Test Custom Token Expressions
The final task is to collect some informal feedback. This is a great way to understand how developers have found using the tool during the PoC. Information on this task can be found here:
- Task Fifteen: Capture discussion about secure code development decisions
",1,task one getting started task one getting started before following these steps make sure you have understood and are happy with all the pre requisites that need to be completed within the pre requisites section of the project board once happy carry on below below you will find some helpful links for getting started with your github advanced security proof of concept fireworks multiple issues have been created to help guide you along with this poc this issue should align with the strategic goals you made as part of the pre req it helps to run some of these tasks in order we recommend you follow the below task one enabling code scanning and secret scanning task two run default code scanning queries task three run additional code scanning queries task four configuring codeql scans task five establish continuous application security scanning task six render results of other sarif based sast tools directly within the github ui if required task seven compare other sast and codeql results task eight bulk enabling code scanning across multiple repositories quickly task nine developer experience task task ten core language support for your organisation task eleven parallel scans task twelve detection of secret keys from known token formats committed to private repositories task thirteen secret scanning integration task fourteen test custom token expressions the final task collect some informal feedback this is great to help understand how developers have found using the tool during the poc information on this task can be found here task fifteen capture discussion about secure code development decisions ,1
8123,26214328753.0,IssuesEvent,2023-01-04 09:41:43,apimatic/core-interfaces-python,https://api.github.com/repos/apimatic/core-interfaces-python,closed,Update PYPI package deployment script,automation,The task is to update the PYPI package deployment script in order to use an environment and also automate the tag and changelog creation. ,1.0,Update PYPI package deployment script - The task is to update the PYPI package deployment script in order to use an environment and also automate the tag and changelog creation. ,1,update pypi package deployment script the task is to update the pypi package deployment script in order to use an environment and also automate the tag and changelog creation ,1
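The deployment-script record above only states the goal. As a loose illustration of the "automate the tag and changelog creation" part, a sketch along the following lines could work; it assumes a `CHANGELOG.md` at the repository root and a version supplied by the CI environment, and the file names, variable names, and workflow are hypothetical rather than taken from the apimatic repository.

```python
import os
import subprocess
from datetime import date

def tag_and_update_changelog(version: str, notes: str) -> None:
    """Create an annotated git tag and prepend a changelog entry (illustrative only)."""
    entry = f"## {version} ({date.today().isoformat()})\n\n{notes}\n\n"

    # Prepend the new entry to CHANGELOG.md (assumed to exist at the repo root).
    with open("CHANGELOG.md", "r", encoding="utf-8") as f:
        existing = f.read()
    with open("CHANGELOG.md", "w", encoding="utf-8") as f:
        f.write(entry + existing)

    # Commit the changelog change and create an annotated release tag.
    subprocess.run(["git", "add", "CHANGELOG.md"], check=True)
    subprocess.run(["git", "commit", "-m", f"chore: changelog for {version}"], check=True)
    subprocess.run(["git", "tag", "-a", f"v{version}", "-m", f"Release {version}"], check=True)

if __name__ == "__main__":
    # In the real pipeline the version would typically come from the CI environment.
    tag_and_update_changelog(os.environ.get("PACKAGE_VERSION", "0.1.0"), "Automated release.")
```

In the workflow the issue describes, something like this would run inside the protected deployment environment after the package build succeeds, so that tagging and changelog updates no longer need manual steps.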
8826,27172301712.0,IssuesEvent,2023-02-17 20:39:09,OneDrive/onedrive-api-docs,https://api.github.com/repos/OneDrive/onedrive-api-docs,closed,Graph API does not give updated lastModifiedTime for few events ,Needs: Triage :mag: area:Scan Guidance automation:Closed,"
#### Category
- [ ] Question
- [ ] Documentation issue
- [ ] **Bug**
When I create or update a document in OneDrive, I get an updated lastModifiedTime every time, but when I share/unshare a file and call the API, lastModifiedTime does not change and remains the same as before.
Steps to Reproduce
Call https://graph.microsoft.com/v1.0/users/{user_id}/drive/root/delta and page through the results until the delta URL is found.
1. Upload a file, then call the API above with the delta link and note the lastModifiedTime.
2. Share the same file after a few seconds or minutes, then call the API above with the delta link again and note the lastModifiedTime (it does not change and remains the same as for the previous event).
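A rough sketch of these delta calls in Python is shown below. It is only an illustration of the reproduction steps above, assuming a valid Microsoft Graph access token in a `GRAPH_TOKEN` environment variable and a placeholder user id; none of these names come from the original report.

```python
import os
import requests

token = os.environ["GRAPH_TOKEN"]   # assumption: a valid Microsoft Graph access token
user_id = "USER_ID"                 # placeholder, not a real user id
url = f"https://graph.microsoft.com/v1.0/users/{user_id}/drive/root/delta"
headers = {"Authorization": f"Bearer {token}"}

delta_link = None
while url:
    resp = requests.get(url, headers=headers, timeout=30)
    resp.raise_for_status()
    data = resp.json()
    for item in data.get("value", []):
        # lastModifiedDateTime is the timestamp the report expects to change after sharing.
        print(item.get("name"), item.get("lastModifiedDateTime"))
    # Page through @odata.nextLink until the response carries an @odata.deltaLink.
    url = data.get("@odata.nextLink")
    delta_link = data.get("@odata.deltaLink", delta_link)

print("delta link to reuse after uploading/sharing:", delta_link)
```

Re-running the query with the saved delta link after uploading and again after sharing the file should show whether the modification timestamp moved, which is the behaviour the report says is missing for share/unshare events.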
[ ]: http://aka.ms/onedrive-api-issues
[x]: http://aka.ms/onedrive-api-issues",1.0,"Graph API does not give updated lastModifiedTime for few events -
#### Category
- [ ] Question
- [ ] Documentation issue
- [ ] **Bug**
When I create or update a document in OneDrive, I get an updated lastModifiedTime every time, but when I share/unshare a file and call the API, lastModifiedTime does not change and remains the same as before.
Steps to Reproduce
Call https://graph.microsoft.com/v1.0/users/{user_id}/drive/root/delta and page through the results until the delta URL is found.
1. Upload a file, then call the API above with the delta link and note the lastModifiedTime.
2. Share the same file after a few seconds or minutes, then call the API above with the delta link again and note the lastModifiedTime (it does not change and remains the same as for the previous event).
[ ]: http://aka.ms/onedrive-api-issues
[x]: http://aka.ms/onedrive-api-issues",1,graph api does not give updated lastmodifiedtime for few events category question documentation issue bug when i create or update a document in onedrive i do get an updated lastmodifiedtime every time but when i share unshare file and call api lastmodifiedtime does not change and remain same as last one steps to reproduce call and move until delta url found upload a file call above api with delta link and notice lastmodifiedtime share same file after few seconds or minutes call above api with delta link and notice lastmodifiedtime lastmodifiedtime does not change and remain same as last event ,1
1552,10325717937.0,IssuesEvent,2019-09-01 19:43:18,a-t-0/Taskwarrior-installation,https://api.github.com/repos/a-t-0/Taskwarrior-installation,opened,Replace hardCoded current path in `TaskwarriorInstaller.ps1`,Automation Quality Robustness bug,"Currently the path `/mnt/c/twInstall/Taskwarrior-installation/AutoInstallTaskwarrior/src/main/resources/autoinstalltaskwarrior/` is still hard-coded. Replace it with a variable based on the current path, as given by a PowerShell command.
",1.0,"Replace hardCoded current path in `TaskwarriorInstaller.ps1` - Currently the path `/mnt/c/twInstall/Taskwarrior-installation/AutoInstallTaskwarrior/src/main/resources/autoinstalltaskwarrior/` is still hard-coded. Replace it with a variable based on the current path, as given by a PowerShell command.
",1,replace hardcoded current path in taskwarriorinstaller currently the path mnt c twinstall taskwarrior installation autoinstalltaskwarrior src main resources autoinstalltaskwarrior is still hardcoded replace this with a variable based on the current path as given by a powershell command ,1
144141,11596117744.0,IssuesEvent,2020-02-24 18:19:12,warfare-plugins/social-warfare,https://api.github.com/repos/warfare-plugins/social-warfare,reopened,Clean Out Pin Buttons wraps content in DOCTYPE/HTML wrapper,COMPLETE: Needs Tested ROUTINE: Maintenance,"Reported at:
https://wordpress.org/support/topic/clean-out-pin-buttons-wraps-content-in-doctype-html-wrapper/
TL;DR – your clean_out_pin_buttons() function in lib/utilities/SWP_Compatibility.php needs to be updated so it doesn’t wrap ‘the_content’ in DOCTYPE and HTML tags. Change your call to loadHTML() so that it uses the LIBXML_HTML_NOIMPLIED and LIBXML_HTML_NODEFDTD options.
I was troubleshooting various issues with a site today where the DIVI mobile menu wouldn’t work on Chrome (did work on FireFox) and it appeared like some scripts and styles were being duplicated. The site is using WP Rocket and when I disabled WP Rocket the issues went away. First I thought it was a javascript combining/minification issue and spent hours looking at that side of it. Nothing seemed to fix the problem, except if I disabled Social Warfare.
So that got me looking at the interaction between Social Warfare and WP Rocket. When WP Rocket is enabled, it combines/minifies the javascript and appends it to the content just before the closing ‘’ tag. When I looked at the page HTML, I found that WP Rocket was including the combined/minified script TWICE in the file. Looking closer, I noticed a stray ‘